Test Report: Docker_Linux_crio 21975

bf5d9cb38ae1a2b3e4a9e22e363e3b0c86085c7c:2025-11-24:42481

Failed tests (37/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 12.61
36 TestAddons/parallel/RegistryCreds 0.4
37 TestAddons/parallel/Ingress 146.68
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.31
41 TestAddons/parallel/CSI 40.11
42 TestAddons/parallel/Headlamp 2.47
43 TestAddons/parallel/CloudSpanner 5.26
44 TestAddons/parallel/LocalPath 11.09
45 TestAddons/parallel/NvidiaDevicePlugin 5.24
46 TestAddons/parallel/Yakd 6.25
47 TestAddons/parallel/AmdGpuDevicePlugin 6.25
97 TestFunctional/parallel/ServiceCmdConnect 602.73
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.06
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.15
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.1
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.21
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
146 TestFunctional/parallel/ServiceCmd/DeployApp 600.55
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
153 TestFunctional/parallel/ServiceCmd/Format 0.52
154 TestFunctional/parallel/ServiceCmd/URL 0.52
191 TestJSONOutput/pause/Command 1.66
197 TestJSONOutput/unpause/Command 1.73
268 TestPause/serial/Pause 6
349 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.2
350 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.25
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.15
356 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.22
366 TestStartStop/group/newest-cni/serial/Pause 5.93
376 TestStartStop/group/old-k8s-version/serial/Pause 6.01
381 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.12
385 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.23
387 TestStartStop/group/no-preload/serial/Pause 5.87
393 TestStartStop/group/embed-certs/serial/Pause 5.12
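
Most of the TestAddons failures in this run share one root cause, visible in each log below: `minikube addons disable` first checks whether the cluster is paused, and that check shells out to `sudo runc list -f json`, which fails on this crio cluster with `open /run/runc: no such file or directory`. A minimal sketch of the failing check, using only commands that appear in the logs (assuming the addons-831846 profile is still up):

	# The crio-side container listing that precedes the check succeeds:
	out/minikube-linux-amd64 -p addons-831846 ssh 'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'

	# The pause check then calls runc directly; its state dir does not exist under crio here:
	out/minikube-linux-amd64 -p addons-831846 ssh 'sudo runc list -f json'
	# => level=error msg="open /run/runc: no such file or directory", exit status 1
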
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable volcano --alsologtostderr -v=1: exit status 11 (251.044457ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:26:16.716731  358432 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:26:16.716848  358432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:16.716859  358432 out.go:374] Setting ErrFile to fd 2...
	I1124 02:26:16.716864  358432 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:16.717054  358432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:26:16.717328  358432 mustload.go:66] Loading cluster: addons-831846
	I1124 02:26:16.717673  358432 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:16.717689  358432 addons.go:622] checking whether the cluster is paused
	I1124 02:26:16.717767  358432 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:16.717779  358432 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:26:16.718132  358432 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:26:16.736619  358432 ssh_runner.go:195] Run: systemctl --version
	I1124 02:26:16.736675  358432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:26:16.754164  358432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:26:16.849881  358432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:26:16.849984  358432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:26:16.878346  358432 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:26:16.878383  358432 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:26:16.878391  358432 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:26:16.878395  358432 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:26:16.878401  358432 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:26:16.878408  358432 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:26:16.878420  358432 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:26:16.878429  358432 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:26:16.878432  358432 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:26:16.878442  358432 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:26:16.878448  358432 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:26:16.878451  358432 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:26:16.878454  358432 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:26:16.878456  358432 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:26:16.878459  358432 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:26:16.878476  358432 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:26:16.878483  358432 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:26:16.878487  358432 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:26:16.878490  358432 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:26:16.878492  358432 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:26:16.878498  358432 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:26:16.878500  358432 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:26:16.878503  358432 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:26:16.878506  358432 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:26:16.878508  358432 cri.go:89] found id: ""
	I1124 02:26:16.878548  358432 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:26:16.892686  358432 out.go:203] 
	W1124 02:26:16.893720  358432 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:16Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:26:16.893735  358432 out.go:285] * 
	* 
	W1124 02:26:16.897742  358432 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:26:16.898778  358432 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
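
Note the sequence above: addons_test.go:850 skips the Volcano test body on crio, so this 0.25s failure comes entirely from the deferred addon-disable cleanup hitting the runc pause-check. Re-running the exact command from the log should reproduce it (a local sketch, assuming the profile still exists):

	out/minikube-linux-amd64 -p addons-831846 addons disable volcano --alsologtostderr -v=1
	# exits 11 with MK_ADDON_DISABLE_PAUSED before the volcano addon itself is touched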

TestAddons/parallel/Registry (12.61s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.295843ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-fmpk9" [f51b7a5d-73cd-404e-87db-f7b56c46e8fc] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00275609s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-qnxkh" [492c9528-7caf-47b5-86ce-62e1cf455391] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002345378s
addons_test.go:392: (dbg) Run:  kubectl --context addons-831846 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-831846 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-831846 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (2.163248456s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 ip
2025/11/24 02:26:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable registry --alsologtostderr -v=1: exit status 11 (243.778536ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:26:38.072231  360228 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:26:38.072473  360228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:38.072481  360228 out.go:374] Setting ErrFile to fd 2...
	I1124 02:26:38.072486  360228 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:38.072650  360228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:26:38.072911  360228 mustload.go:66] Loading cluster: addons-831846
	I1124 02:26:38.073249  360228 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:38.073264  360228 addons.go:622] checking whether the cluster is paused
	I1124 02:26:38.073342  360228 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:38.073354  360228 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:26:38.073696  360228 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:26:38.092434  360228 ssh_runner.go:195] Run: systemctl --version
	I1124 02:26:38.092496  360228 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:26:38.109061  360228 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:26:38.205147  360228 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:26:38.205233  360228 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:26:38.233478  360228 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:26:38.233500  360228 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:26:38.233505  360228 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:26:38.233511  360228 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:26:38.233515  360228 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:26:38.233520  360228 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:26:38.233525  360228 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:26:38.233529  360228 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:26:38.233534  360228 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:26:38.233541  360228 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:26:38.233549  360228 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:26:38.233554  360228 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:26:38.233558  360228 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:26:38.233563  360228 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:26:38.233568  360228 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:26:38.233578  360228 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:26:38.233581  360228 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:26:38.233611  360228 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:26:38.233618  360228 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:26:38.233621  360228 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:26:38.233624  360228 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:26:38.233627  360228 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:26:38.233630  360228 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:26:38.233633  360228 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:26:38.233635  360228 cri.go:89] found id: ""
	I1124 02:26:38.233667  360228 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:26:38.246953  360228 out.go:203] 
	W1124 02:26:38.247939  360228 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:38Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:38Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:26:38.247961  360228 out.go:285] * 
	* 
	W1124 02:26:38.251764  360228 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:26:38.252842  360228 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (12.61s)
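
The registry addon itself was healthy here: both the registry and registry-proxy pods became Ready within ~5s each, and the in-cluster probe completed in ~2.2s; only the trailing disable step failed. The probe can be re-run independently of the disable path (command taken verbatim from the log):

	kubectl --context addons-831846 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# exit 0 indicates the registry Service resolved and answered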

TestAddons/parallel/RegistryCreds (0.4s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.731728ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-831846
addons_test.go:332: (dbg) Run:  kubectl --context addons-831846 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (241.173508ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:26:48.501375  361954 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:26:48.501481  361954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:48.501490  361954 out.go:374] Setting ErrFile to fd 2...
	I1124 02:26:48.501493  361954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:48.501677  361954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:26:48.501946  361954 mustload.go:66] Loading cluster: addons-831846
	I1124 02:26:48.502286  361954 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:48.502303  361954 addons.go:622] checking whether the cluster is paused
	I1124 02:26:48.502382  361954 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:48.502394  361954 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:26:48.502724  361954 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:26:48.519424  361954 ssh_runner.go:195] Run: systemctl --version
	I1124 02:26:48.519494  361954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:26:48.537710  361954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:26:48.634978  361954 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:26:48.635065  361954 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:26:48.663618  361954 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:26:48.663650  361954 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:26:48.663657  361954 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:26:48.663661  361954 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:26:48.663665  361954 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:26:48.663670  361954 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:26:48.663675  361954 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:26:48.663679  361954 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:26:48.663683  361954 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:26:48.663690  361954 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:26:48.663700  361954 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:26:48.663705  361954 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:26:48.663714  361954 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:26:48.663719  361954 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:26:48.663726  361954 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:26:48.663733  361954 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:26:48.663740  361954 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:26:48.663744  361954 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:26:48.663746  361954 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:26:48.663749  361954 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:26:48.663755  361954 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:26:48.663758  361954 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:26:48.663764  361954 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:26:48.663767  361954 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:26:48.663769  361954 cri.go:89] found id: ""
	I1124 02:26:48.663804  361954 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:26:48.677048  361954 out.go:203] 
	W1124 02:26:48.678363  361954 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:26:48.678379  361954 out.go:285] * 
	* 
	W1124 02:26:48.682279  361954 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:26:48.683662  361954 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.40s)

TestAddons/parallel/Ingress (146.68s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-831846 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-831846 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-831846 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [0a9aa95a-69f0-420d-95b8-a69a4d15df1a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [0a9aa95a-69f0-420d-95b8-a69a4d15df1a] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004083252s
I1124 02:26:50.060602  349078 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.317310791s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-831846 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
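
The `ssh: Process exited with status 28` above is curl's own exit code propagated through the SSH session: 28 is CURLE_OPERATION_TIMEDOUT, which matches the 2m13s the command ran before giving up. One way to narrow down where the request stalls (a diagnostic sketch; the 30s cap is an arbitrary choice, not taken from the test):

	out/minikube-linux-amd64 -p addons-831846 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# -v shows whether the TCP connect, the request send, or the response itself times out
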
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-831846
helpers_test.go:243: (dbg) docker inspect addons-831846:

-- stdout --
	[
	    {
	        "Id": "2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816",
	        "Created": "2025-11-24T02:24:35.441680908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 351085,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T02:24:35.470586249Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816/hostname",
	        "HostsPath": "/var/lib/docker/containers/2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816/hosts",
	        "LogPath": "/var/lib/docker/containers/2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816/2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816-json.log",
	        "Name": "/addons-831846",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-831846:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-831846",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816",
	                "LowerDir": "/var/lib/docker/overlay2/d83c5f0438e590d391189509de54f1d798e2e18ff41b633bb43cbbad798581f0-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d83c5f0438e590d391189509de54f1d798e2e18ff41b633bb43cbbad798581f0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d83c5f0438e590d391189509de54f1d798e2e18ff41b633bb43cbbad798581f0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d83c5f0438e590d391189509de54f1d798e2e18ff41b633bb43cbbad798581f0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-831846",
	                "Source": "/var/lib/docker/volumes/addons-831846/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-831846",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-831846",
	                "name.minikube.sigs.k8s.io": "addons-831846",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0d53dbb891f5d477aab1e91e68d700cb2edce62f5f6860fb4e3e5b9d6f0dae7e",
	            "SandboxKey": "/var/run/docker/netns/0d53dbb891f5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-831846": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dc74d17b046a1b9232c65675579012ae6622be9ecbf5d337b28a0d3bb7d576bf",
	                    "EndpointID": "be205d1bef46584e2dcae24a84f245d522f5104e6b54999e1d0f023b2b15ffcc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "42:bb:9c:df:a5:fd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-831846",
	                        "2bbbb8de094b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-831846 -n addons-831846
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-831846 logs -n 25: (1.086368455s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-907926 --alsologtostderr --binary-mirror http://127.0.0.1:38615 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-907926 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │                     │
	│ delete  │ -p binary-mirror-907926                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-907926 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ addons  │ enable dashboard -p addons-831846                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │                     │
	│ addons  │ disable dashboard -p addons-831846                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │                     │
	│ start   │ -p addons-831846 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:26 UTC │
	│ addons  │ addons-831846 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ addons  │ addons-831846 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ addons  │ enable headlamp -p addons-831846 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ addons  │ addons-831846 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ addons  │ addons-831846 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ addons  │ addons-831846 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ addons  │ addons-831846 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ ip      │ addons-831846 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │ 24 Nov 25 02:26 UTC │
	│ addons  │ addons-831846 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ addons  │ addons-831846 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ addons  │ addons-831846 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ ssh     │ addons-831846 ssh cat /opt/local-path-provisioner/pvc-f6a25546-e26a-4b70-882a-bd7b6a6cd688_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │ 24 Nov 25 02:26 UTC │
	│ addons  │ addons-831846 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-831846                                                                                                                                                                                                                                                                                                                                                                                           │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │ 24 Nov 25 02:26 UTC │
	│ addons  │ addons-831846 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ addons  │ addons-831846 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ ssh     │ addons-831846 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │                     │
	│ addons  │ addons-831846 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:27 UTC │                     │
	│ addons  │ addons-831846 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:27 UTC │                     │
	│ ip      │ addons-831846 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-831846        │ jenkins │ v1.37.0 │ 24 Nov 25 02:29 UTC │ 24 Nov 25 02:29 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:24:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
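	(Reading aid for the log below, derived from the format line above: a line such as "I1124 02:24:12.497235  350425 out.go:360] ..." breaks down as
	  I                 severity (I=Info, W=Warning, E=Error, F=Fatal)
	  1124              mmdd, i.e. Nov 24
	  02:24:12.497235   hh:mm:ss.uuuuuu
	  350425            threadid (here the minikube process id)
	  out.go:360        file:line of the emitting source
	with the message text following the closing bracket.)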
	I1124 02:24:12.497235  350425 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:24:12.497486  350425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:24:12.497494  350425 out.go:374] Setting ErrFile to fd 2...
	I1124 02:24:12.497498  350425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:24:12.497728  350425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:24:12.498234  350425 out.go:368] Setting JSON to false
	I1124 02:24:12.499079  350425 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3999,"bootTime":1763947053,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:24:12.499129  350425 start.go:143] virtualization: kvm guest
	I1124 02:24:12.500474  350425 out.go:179] * [addons-831846] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:24:12.501446  350425 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:24:12.501444  350425 notify.go:221] Checking for updates...
	I1124 02:24:12.502502  350425 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:24:12.503904  350425 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 02:24:12.504959  350425 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 02:24:12.505861  350425 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:24:12.506741  350425 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:24:12.507752  350425 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:24:12.530151  350425 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:24:12.530306  350425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:24:12.586274  350425 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 02:24:12.576722653 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
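	(The blob above is the raw output of the `docker system info --format "{{json .}}"` run that minikube parses. Individual fields can be spot-checked by hand with Go-template selectors against the same command, using the field names as they appear in the JSON, e.g.:
	  docker system info --format '{{.CgroupDriver}}'   # prints "systemd" on this agent
	  docker system info --format '{{json .Runtimes}}'
	)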
	I1124 02:24:12.586379  350425 docker.go:319] overlay module found
	I1124 02:24:12.588276  350425 out.go:179] * Using the docker driver based on user configuration
	I1124 02:24:12.589187  350425 start.go:309] selected driver: docker
	I1124 02:24:12.589199  350425 start.go:927] validating driver "docker" against <nil>
	I1124 02:24:12.589210  350425 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:24:12.589744  350425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:24:12.641105  350425 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 02:24:12.632279885 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:24:12.641281  350425 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 02:24:12.641496  350425 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 02:24:12.642800  350425 out.go:179] * Using Docker driver with root privileges
	I1124 02:24:12.643671  350425 cni.go:84] Creating CNI manager for ""
	I1124 02:24:12.643739  350425 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 02:24:12.643750  350425 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 02:24:12.643809  350425 start.go:353] cluster config:
	{Name:addons-831846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-831846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:24:12.644981  350425 out.go:179] * Starting "addons-831846" primary control-plane node in "addons-831846" cluster
	I1124 02:24:12.645847  350425 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 02:24:12.646844  350425 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 02:24:12.647709  350425 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 02:24:12.647737  350425 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 02:24:12.647745  350425 cache.go:65] Caching tarball of preloaded images
	I1124 02:24:12.647805  350425 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 02:24:12.647826  350425 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 02:24:12.647834  350425 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 02:24:12.648151  350425 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/config.json ...
	I1124 02:24:12.648175  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/config.json: {Name:mk6c046471a659c96204a53c6d5135384c43c9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
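	(The profile file saved above is plain JSON; assuming jq is present on the agent, the fields this run cares about can be pulled out with something like:
	  jq '{Driver, Runtime: .KubernetesConfig.ContainerRuntime, K8s: .KubernetesConfig.KubernetesVersion}' \
	    /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/config.json
	which should echo the Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1 summary seen elsewhere in this report.)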
	I1124 02:24:12.663382  350425 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 02:24:12.663508  350425 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory
	I1124 02:24:12.663527  350425 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory, skipping pull
	I1124 02:24:12.663532  350425 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in cache, skipping pull
	I1124 02:24:12.663543  350425 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 as a tarball
	I1124 02:24:12.663553  350425 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 from local cache
	I1124 02:24:24.353432  350425 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 from cached tarball
	I1124 02:24:24.353477  350425 cache.go:243] Successfully downloaded all kic artifacts
	I1124 02:24:24.353536  350425 start.go:360] acquireMachinesLock for addons-831846: {Name:mk78cdbea9ce09db40f77c1e12049c59393ec2d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 02:24:24.353639  350425 start.go:364] duration metric: took 80.002µs to acquireMachinesLock for "addons-831846"
	I1124 02:24:24.353672  350425 start.go:93] Provisioning new machine with config: &{Name:addons-831846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-831846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 02:24:24.353793  350425 start.go:125] createHost starting for "" (driver="docker")
	I1124 02:24:24.355301  350425 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1124 02:24:24.355609  350425 start.go:159] libmachine.API.Create for "addons-831846" (driver="docker")
	I1124 02:24:24.355651  350425 client.go:173] LocalClient.Create starting
	I1124 02:24:24.355778  350425 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 02:24:24.470843  350425 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 02:24:24.546856  350425 cli_runner.go:164] Run: docker network inspect addons-831846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 02:24:24.563514  350425 cli_runner.go:211] docker network inspect addons-831846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 02:24:24.563586  350425 network_create.go:284] running [docker network inspect addons-831846] to gather additional debugging logs...
	I1124 02:24:24.563608  350425 cli_runner.go:164] Run: docker network inspect addons-831846
	W1124 02:24:24.578139  350425 cli_runner.go:211] docker network inspect addons-831846 returned with exit code 1
	I1124 02:24:24.578160  350425 network_create.go:287] error running [docker network inspect addons-831846]: docker network inspect addons-831846: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-831846 not found
	I1124 02:24:24.578171  350425 network_create.go:289] output of [docker network inspect addons-831846]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-831846 not found
	
	** /stderr **
	I1124 02:24:24.578290  350425 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 02:24:24.593443  350425 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c8d000}
	I1124 02:24:24.593476  350425 network_create.go:124] attempt to create docker network addons-831846 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 02:24:24.593528  350425 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-831846 addons-831846
	I1124 02:24:24.635600  350425 network_create.go:108] docker network addons-831846 192.168.49.0/24 created
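	(A quick sanity check of the network created above, using the same docker CLI the test drives:
	  docker network inspect addons-831846 --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	  # expected: 192.168.49.0/24 via 192.168.49.1
	)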
	I1124 02:24:24.635638  350425 kic.go:121] calculated static IP "192.168.49.2" for the "addons-831846" container
	I1124 02:24:24.635708  350425 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 02:24:24.652151  350425 cli_runner.go:164] Run: docker volume create addons-831846 --label name.minikube.sigs.k8s.io=addons-831846 --label created_by.minikube.sigs.k8s.io=true
	I1124 02:24:24.668288  350425 oci.go:103] Successfully created a docker volume addons-831846
	I1124 02:24:24.668349  350425 cli_runner.go:164] Run: docker run --rm --name addons-831846-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-831846 --entrypoint /usr/bin/test -v addons-831846:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 02:24:31.095422  350425 cli_runner.go:217] Completed: docker run --rm --name addons-831846-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-831846 --entrypoint /usr/bin/test -v addons-831846:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib: (6.427024618s)
	I1124 02:24:31.095464  350425 oci.go:107] Successfully prepared a docker volume addons-831846
	I1124 02:24:31.095529  350425 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 02:24:31.095547  350425 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 02:24:31.095617  350425 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-831846:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 02:24:35.371995  350425 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-831846:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.276327463s)
	I1124 02:24:35.372033  350425 kic.go:203] duration metric: took 4.276481705s to extract preloaded images to volume ...
	W1124 02:24:35.372120  350425 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 02:24:35.372164  350425 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 02:24:35.372207  350425 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 02:24:35.426685  350425 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-831846 --name addons-831846 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-831846 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-831846 --network addons-831846 --ip 192.168.49.2 --volume addons-831846:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
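	(The --publish=127.0.0.1::8443, ::22, ::2376, ::5000 and ::32443 flags above request ephemeral host ports from Docker; the port actually bound can be read back with `docker port`, e.g.:
	  docker port addons-831846 22/tcp
	  # 127.0.0.1:33138, the SSH endpoint dialed a few lines below
	)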
	I1124 02:24:35.694411  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Running}}
	I1124 02:24:35.712736  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:35.729204  350425 cli_runner.go:164] Run: docker exec addons-831846 stat /var/lib/dpkg/alternatives/iptables
	I1124 02:24:35.781706  350425 oci.go:144] the created container "addons-831846" has a running status.
	I1124 02:24:35.781734  350425 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa...
	I1124 02:24:35.849039  350425 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 02:24:35.871269  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:35.887046  350425 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 02:24:35.887065  350425 kic_runner.go:114] Args: [docker exec --privileged addons-831846 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 02:24:35.926090  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:35.946965  350425 machine.go:94] provisionDockerMachine start ...
	I1124 02:24:35.947085  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:35.964671  350425 main.go:143] libmachine: Using SSH client type: native
	I1124 02:24:35.965013  350425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 02:24:35.965029  350425 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 02:24:35.965628  350425 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55204->127.0.0.1:33138: read: connection reset by peer
	I1124 02:24:39.102239  350425 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-831846
	
	I1124 02:24:39.102299  350425 ubuntu.go:182] provisioning hostname "addons-831846"
	I1124 02:24:39.102380  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:39.119443  350425 main.go:143] libmachine: Using SSH client type: native
	I1124 02:24:39.119641  350425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 02:24:39.119653  350425 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-831846 && echo "addons-831846" | sudo tee /etc/hostname
	I1124 02:24:39.262156  350425 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-831846
	
	I1124 02:24:39.262231  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:39.279171  350425 main.go:143] libmachine: Using SSH client type: native
	I1124 02:24:39.279383  350425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 02:24:39.279400  350425 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-831846' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-831846/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-831846' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 02:24:39.413737  350425 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 02:24:39.413765  350425 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 02:24:39.413783  350425 ubuntu.go:190] setting up certificates
	I1124 02:24:39.413805  350425 provision.go:84] configureAuth start
	I1124 02:24:39.413867  350425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-831846
	I1124 02:24:39.431315  350425 provision.go:143] copyHostCerts
	I1124 02:24:39.431383  350425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 02:24:39.431508  350425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 02:24:39.431597  350425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 02:24:39.431662  350425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.addons-831846 san=[127.0.0.1 192.168.49.2 addons-831846 localhost minikube]
	I1124 02:24:39.486632  350425 provision.go:177] copyRemoteCerts
	I1124 02:24:39.486696  350425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 02:24:39.486744  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:39.503145  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:39.599217  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 02:24:39.617365  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 02:24:39.633463  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1124 02:24:39.649273  350425 provision.go:87] duration metric: took 235.454381ms to configureAuth
	I1124 02:24:39.649292  350425 ubuntu.go:206] setting minikube options for container-runtime
	I1124 02:24:39.649474  350425 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:24:39.649575  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:39.665826  350425 main.go:143] libmachine: Using SSH client type: native
	I1124 02:24:39.666058  350425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 02:24:39.666075  350425 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 02:24:39.933049  350425 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 02:24:39.933073  350425 machine.go:97] duration metric: took 3.986086694s to provisionDockerMachine
	I1124 02:24:39.933084  350425 client.go:176] duration metric: took 15.577422313s to LocalClient.Create
	I1124 02:24:39.933103  350425 start.go:167] duration metric: took 15.577495237s to libmachine.API.Create "addons-831846"
	I1124 02:24:39.933112  350425 start.go:293] postStartSetup for "addons-831846" (driver="docker")
	I1124 02:24:39.933125  350425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 02:24:39.933184  350425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 02:24:39.933221  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:39.949735  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:40.047426  350425 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 02:24:40.050684  350425 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 02:24:40.050708  350425 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 02:24:40.050718  350425 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 02:24:40.050766  350425 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 02:24:40.050788  350425 start.go:296] duration metric: took 117.668906ms for postStartSetup
	I1124 02:24:40.051062  350425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-831846
	I1124 02:24:40.069014  350425 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/config.json ...
	I1124 02:24:40.069320  350425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:24:40.069369  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:40.085248  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:40.178140  350425 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 02:24:40.182372  350425 start.go:128] duration metric: took 15.82856204s to createHost
	I1124 02:24:40.182395  350425 start.go:83] releasing machines lock for "addons-831846", held for 15.828739912s
	I1124 02:24:40.182447  350425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-831846
	I1124 02:24:40.198690  350425 ssh_runner.go:195] Run: cat /version.json
	I1124 02:24:40.198739  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:40.198771  350425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 02:24:40.198853  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:40.216589  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:40.218201  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:40.360665  350425 ssh_runner.go:195] Run: systemctl --version
	I1124 02:24:40.366635  350425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 02:24:40.398781  350425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 02:24:40.403067  350425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 02:24:40.403125  350425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 02:24:40.427036  350425 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 02:24:40.427059  350425 start.go:496] detecting cgroup driver to use...
	I1124 02:24:40.427094  350425 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 02:24:40.427144  350425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 02:24:40.441733  350425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 02:24:40.452743  350425 docker.go:218] disabling cri-docker service (if available) ...
	I1124 02:24:40.452795  350425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 02:24:40.467458  350425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 02:24:40.482930  350425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 02:24:40.561086  350425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 02:24:40.644408  350425 docker.go:234] disabling docker service ...
	I1124 02:24:40.644481  350425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 02:24:40.661467  350425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 02:24:40.672880  350425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 02:24:40.750724  350425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 02:24:40.828145  350425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 02:24:40.839099  350425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 02:24:40.852076  350425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 02:24:40.852135  350425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.861539  350425 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 02:24:40.861586  350425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.869472  350425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.877366  350425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.885083  350425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 02:24:40.892471  350425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.900317  350425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.912508  350425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.920342  350425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 02:24:40.927074  350425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 02:24:40.933617  350425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 02:24:41.007380  350425 ssh_runner.go:195] Run: sudo systemctl restart crio
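	(Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like the sketch below; the section headers assume the stock kicbase layout:
	  [crio.image]
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  [crio.runtime]
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	hence the systemctl restart crio issued just above.)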
	I1124 02:24:41.133750  350425 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 02:24:41.133821  350425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 02:24:41.137861  350425 start.go:564] Will wait 60s for crictl version
	I1124 02:24:41.137946  350425 ssh_runner.go:195] Run: which crictl
	I1124 02:24:41.141343  350425 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 02:24:41.165003  350425 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 02:24:41.165089  350425 ssh_runner.go:195] Run: crio --version
	I1124 02:24:41.192796  350425 ssh_runner.go:195] Run: crio --version
	I1124 02:24:41.220832  350425 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 02:24:41.221820  350425 cli_runner.go:164] Run: docker network inspect addons-831846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 02:24:41.238488  350425 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 02:24:41.242320  350425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
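	(The one-liner above is a safe-rewrite idiom: build the new hosts file under /tmp, then `sudo cp` it over /etc/hosts rather than mv, so the target keeps its inode, owner, and mode; grep -v'ing the old entry first makes reruns idempotent. A generalized sketch, with NAME and IP as placeholders:
	  { grep -v $'\tNAME$' /etc/hosts; echo $'IP\tNAME'; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
	)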
	I1124 02:24:41.251830  350425 kubeadm.go:884] updating cluster {Name:addons-831846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-831846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 02:24:41.251965  350425 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 02:24:41.252025  350425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 02:24:41.281380  350425 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 02:24:41.281406  350425 crio.go:433] Images already preloaded, skipping extraction
	I1124 02:24:41.281452  350425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 02:24:41.304835  350425 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 02:24:41.304856  350425 cache_images.go:86] Images are preloaded, skipping loading
	I1124 02:24:41.304865  350425 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1124 02:24:41.304973  350425 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-831846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-831846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
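	(The [Unit]/[Service] fragment above lands as the systemd drop-in 10-kubeadm.conf, scp'd a few lines below; once the node is running, the merged unit can be inspected from the host with:
	  minikube -p addons-831846 ssh -- systemctl cat kubelet
	)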
	I1124 02:24:41.305056  350425 ssh_runner.go:195] Run: crio config
	I1124 02:24:41.347938  350425 cni.go:84] Creating CNI manager for ""
	I1124 02:24:41.347963  350425 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 02:24:41.347984  350425 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 02:24:41.348018  350425 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-831846 NodeName:addons-831846 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 02:24:41.348188  350425 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-831846"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
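	(The generated kubeadm config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below; recent kubeadm releases can lint such a file offline, e.g.:
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	though this run relies on kubeadm init's own validation instead.)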
	
	I1124 02:24:41.348263  350425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 02:24:41.355941  350425 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 02:24:41.355999  350425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 02:24:41.363225  350425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 02:24:41.375076  350425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 02:24:41.388663  350425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1124 02:24:41.400010  350425 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 02:24:41.403229  350425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 02:24:41.412233  350425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 02:24:41.487680  350425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 02:24:41.510732  350425 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846 for IP: 192.168.49.2
	I1124 02:24:41.510758  350425 certs.go:195] generating shared ca certs ...
	I1124 02:24:41.510779  350425 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.510926  350425 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 02:24:41.589454  350425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt ...
	I1124 02:24:41.589483  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt: {Name:mk41cc2f0def56fbfb754b3a8750ee8828de6e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.589643  350425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key ...
	I1124 02:24:41.589659  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key: {Name:mked50c87c7e2fff49a6fd4196dbd325894e67f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.589762  350425 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 02:24:41.619716  350425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt ...
	I1124 02:24:41.619736  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt: {Name:mka5b8c2d9a65ddc1272b7582ce7c34dbde1e911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.619856  350425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key ...
	I1124 02:24:41.619871  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key: {Name:mke60c8f9462f69b2c9cb21c9bff7faff5a9d7e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.619986  350425 certs.go:257] generating profile certs ...
	I1124 02:24:41.620060  350425 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.key
	I1124 02:24:41.620078  350425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt with IP's: []
	I1124 02:24:41.724858  350425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt ...
	I1124 02:24:41.724878  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: {Name:mk1db91edf22fe94153383e289f0e273481d0368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.725011  350425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.key ...
	I1124 02:24:41.725026  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.key: {Name:mk2459d5b0249dfebdb293d9657d24b961375413 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.725118  350425 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.key.510819de
	I1124 02:24:41.725141  350425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt.510819de with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 02:24:41.919121  350425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt.510819de ...
	I1124 02:24:41.919145  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt.510819de: {Name:mkc052818f6b009e7c1c266c9c3b79e5cc6d11b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.919399  350425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.key.510819de ...
	I1124 02:24:41.919429  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.key.510819de: {Name:mke211e616cbbf16fcfc66ec0a61e00c5f5953ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.919581  350425 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt.510819de -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt
	I1124 02:24:41.919705  350425 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.key.510819de -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.key
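The IP SANs requested for the apiserver cert ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]) cover the in-cluster service VIP (the first address of the 10.96.0.0/12 ServiceCIDR), loopback, an additional default, and the node IP. They can be read back from the assembled cert with openssl (path as written above):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'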
	I1124 02:24:41.919786  350425 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.key
	I1124 02:24:41.919815  350425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.crt with IP's: []
	I1124 02:24:42.020937  350425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.crt ...
	I1124 02:24:42.020964  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.crt: {Name:mk7dcbbc8a7679f60577e37ec6a554aa27393353 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:42.021119  350425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.key ...
	I1124 02:24:42.021138  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.key: {Name:mk71e00b6ab7c8a71cce2cb67ede8d73f20238d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:42.021341  350425 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 02:24:42.021396  350425 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 02:24:42.021435  350425 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 02:24:42.021471  350425 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 02:24:42.022104  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 02:24:42.039730  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 02:24:42.056003  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 02:24:42.072093  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 02:24:42.087990  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 02:24:42.103927  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 02:24:42.119578  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 02:24:42.135144  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 02:24:42.150800  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 02:24:42.168298  350425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 02:24:42.179552  350425 ssh_runner.go:195] Run: openssl version
	I1124 02:24:42.185163  350425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 02:24:42.194858  350425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 02:24:42.198218  350425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 02:24:42.198259  350425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 02:24:42.231361  350425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
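The b5213941.0 name is not arbitrary: OpenSSL resolves trust anchors in /etc/ssl/certs by subject-name hash plus a .0 collision suffix, which is exactly what the preceding openssl x509 -hash run computed. Reproducing it by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941; verification then finds the CA via /etc/ssl/certs/b5213941.0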
	I1124 02:24:42.238932  350425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 02:24:42.242193  350425 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 02:24:42.242231  350425 kubeadm.go:401] StartCluster: {Name:addons-831846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-831846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:24:42.242301  350425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:24:42.242338  350425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:24:42.267756  350425 cri.go:89] found id: ""
	I1124 02:24:42.267806  350425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 02:24:42.274856  350425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 02:24:42.281905  350425 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 02:24:42.281973  350425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 02:24:42.288768  350425 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 02:24:42.288783  350425 kubeadm.go:158] found existing configuration files:
	
	I1124 02:24:42.288811  350425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 02:24:42.295927  350425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 02:24:42.295965  350425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 02:24:42.303010  350425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 02:24:42.310537  350425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 02:24:42.310597  350425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 02:24:42.318174  350425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 02:24:42.325492  350425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 02:24:42.325538  350425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 02:24:42.332744  350425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 02:24:42.340343  350425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 02:24:42.340395  350425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 02:24:42.347608  350425 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 02:24:42.403120  350425 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 02:24:42.457353  350425 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 02:24:52.179997  350425 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 02:24:52.180082  350425 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 02:24:52.180224  350425 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 02:24:52.180294  350425 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 02:24:52.180337  350425 kubeadm.go:319] OS: Linux
	I1124 02:24:52.180403  350425 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 02:24:52.180475  350425 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 02:24:52.180562  350425 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 02:24:52.180635  350425 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 02:24:52.180719  350425 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 02:24:52.180795  350425 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 02:24:52.180871  350425 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 02:24:52.180942  350425 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 02:24:52.181039  350425 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 02:24:52.181196  350425 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 02:24:52.181317  350425 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 02:24:52.181412  350425 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 02:24:52.183320  350425 out.go:252]   - Generating certificates and keys ...
	I1124 02:24:52.183398  350425 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 02:24:52.183489  350425 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 02:24:52.183580  350425 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 02:24:52.183666  350425 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 02:24:52.183748  350425 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 02:24:52.183817  350425 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 02:24:52.183915  350425 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 02:24:52.184052  350425 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-831846 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 02:24:52.184116  350425 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 02:24:52.184254  350425 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-831846 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 02:24:52.184357  350425 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 02:24:52.184444  350425 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 02:24:52.184506  350425 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 02:24:52.184595  350425 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 02:24:52.184686  350425 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 02:24:52.184767  350425 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 02:24:52.184840  350425 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 02:24:52.184953  350425 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 02:24:52.185031  350425 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 02:24:52.185132  350425 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 02:24:52.185240  350425 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 02:24:52.186316  350425 out.go:252]   - Booting up control plane ...
	I1124 02:24:52.186410  350425 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 02:24:52.186491  350425 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 02:24:52.186573  350425 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 02:24:52.186684  350425 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 02:24:52.186818  350425 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 02:24:52.186942  350425 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 02:24:52.187052  350425 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 02:24:52.187113  350425 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 02:24:52.187267  350425 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 02:24:52.187410  350425 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 02:24:52.187504  350425 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001402195s
	I1124 02:24:52.187615  350425 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 02:24:52.187720  350425 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 02:24:52.187830  350425 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 02:24:52.187939  350425 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 02:24:52.188042  350425 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.12413053s
	I1124 02:24:52.188140  350425 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.958611031s
	I1124 02:24:52.188202  350425 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501124038s
	I1124 02:24:52.188297  350425 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 02:24:52.188406  350425 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 02:24:52.188456  350425 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 02:24:52.188640  350425 kubeadm.go:319] [mark-control-plane] Marking the node addons-831846 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 02:24:52.188690  350425 kubeadm.go:319] [bootstrap-token] Using token: ddy8ur.z7eb0digsuktkhl7
	I1124 02:24:52.189977  350425 out.go:252]   - Configuring RBAC rules ...
	I1124 02:24:52.190085  350425 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 02:24:52.190160  350425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 02:24:52.190287  350425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 02:24:52.190401  350425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 02:24:52.190526  350425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 02:24:52.190635  350425 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 02:24:52.190762  350425 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 02:24:52.190823  350425 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 02:24:52.190908  350425 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 02:24:52.190921  350425 kubeadm.go:319] 
	I1124 02:24:52.191013  350425 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 02:24:52.191021  350425 kubeadm.go:319] 
	I1124 02:24:52.191115  350425 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 02:24:52.191123  350425 kubeadm.go:319] 
	I1124 02:24:52.191178  350425 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 02:24:52.191260  350425 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 02:24:52.191333  350425 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 02:24:52.191341  350425 kubeadm.go:319] 
	I1124 02:24:52.191412  350425 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 02:24:52.191420  350425 kubeadm.go:319] 
	I1124 02:24:52.191488  350425 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 02:24:52.191496  350425 kubeadm.go:319] 
	I1124 02:24:52.191559  350425 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 02:24:52.191622  350425 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 02:24:52.191685  350425 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 02:24:52.191691  350425 kubeadm.go:319] 
	I1124 02:24:52.191774  350425 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 02:24:52.191875  350425 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 02:24:52.191897  350425 kubeadm.go:319] 
	I1124 02:24:52.191999  350425 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ddy8ur.z7eb0digsuktkhl7 \
	I1124 02:24:52.192119  350425 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 02:24:52.192149  350425 kubeadm.go:319] 	--control-plane 
	I1124 02:24:52.192163  350425 kubeadm.go:319] 
	I1124 02:24:52.192267  350425 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 02:24:52.192275  350425 kubeadm.go:319] 
	I1124 02:24:52.192389  350425 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ddy8ur.z7eb0digsuktkhl7 \
	I1124 02:24:52.192539  350425 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
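The --discovery-token-ca-cert-hash that joining nodes present is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed on the control plane with the standard openssl pipeline from the kubeadm docs, using the CA copied to /var/lib/minikube/certs earlier in this run:

	sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'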
	I1124 02:24:52.192567  350425 cni.go:84] Creating CNI manager for ""
	I1124 02:24:52.192577  350425 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 02:24:52.194307  350425 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 02:24:52.195231  350425 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 02:24:52.199378  350425 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 02:24:52.199393  350425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 02:24:52.211671  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
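kindnet ships as a DaemonSet, so once cni.yaml is applied a label query is enough to confirm the CNI pods scheduled (a sketch, assuming the manifest's usual app=kindnet label):

	kubectl -n kube-system get pods -l app=kindnet -o wide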
	I1124 02:24:52.399360  350425 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 02:24:52.399453  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:52.399469  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-831846 minikube.k8s.io/updated_at=2025_11_24T02_24_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=addons-831846 minikube.k8s.io/primary=true
	I1124 02:24:52.408673  350425 ops.go:34] apiserver oom_adj: -16
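The -16 read back here is the legacy /proc/<pid>/oom_adj view of the oom_score_adj the kubelet assigns to critical control-plane pods (-997, which the kernel scales down to -16 on the old -17..15 range), so the OOM killer avoids the apiserver except as a last resort. Both views can be checked directly, mirroring the pgrep call in the log:

	cat /proc/$(pgrep kube-apiserver)/oom_score_adj   # expected: -997
	cat /proc/$(pgrep kube-apiserver)/oom_adj         # legacy scale: -16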
	I1124 02:24:52.469572  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:52.970554  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:53.470026  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:53.969909  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:54.469641  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:54.970899  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:55.470462  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:55.969601  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:56.470546  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:56.970238  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:57.469604  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:57.531712  350425 kubeadm.go:1114] duration metric: took 5.132320273s to wait for elevateKubeSystemPrivileges
	I1124 02:24:57.531757  350425 kubeadm.go:403] duration metric: took 15.289526796s to StartCluster
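The burst of kubectl get sa default calls above is a wait loop: the cluster-admin binding for kube-system:default is only effective once the ServiceAccount controller has populated the namespace, so the runner re-queries on a ~500ms cadence (visible in the timestamps) until the call succeeds. The equivalent shell loop, as a sketch:

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done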
	I1124 02:24:57.531782  350425 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:57.531943  350425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 02:24:57.532482  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:57.532721  350425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 02:24:57.532768  350425 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 02:24:57.532861  350425 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 02:24:57.533022  350425 addons.go:70] Setting yakd=true in profile "addons-831846"
	I1124 02:24:57.533034  350425 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-831846"
	I1124 02:24:57.533046  350425 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:24:57.533059  350425 addons.go:70] Setting registry-creds=true in profile "addons-831846"
	I1124 02:24:57.533066  350425 addons.go:70] Setting cloud-spanner=true in profile "addons-831846"
	I1124 02:24:57.533071  350425 addons.go:239] Setting addon registry-creds=true in "addons-831846"
	I1124 02:24:57.533068  350425 addons.go:70] Setting registry=true in profile "addons-831846"
	I1124 02:24:57.533083  350425 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-831846"
	I1124 02:24:57.533051  350425 addons.go:239] Setting addon yakd=true in "addons-831846"
	I1124 02:24:57.533099  350425 addons.go:70] Setting metrics-server=true in profile "addons-831846"
	I1124 02:24:57.533106  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533117  350425 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-831846"
	I1124 02:24:57.533120  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533121  350425 addons.go:70] Setting ingress-dns=true in profile "addons-831846"
	I1124 02:24:57.533129  350425 addons.go:70] Setting default-storageclass=true in profile "addons-831846"
	I1124 02:24:57.533136  350425 addons.go:70] Setting volcano=true in profile "addons-831846"
	I1124 02:24:57.533144  350425 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-831846"
	I1124 02:24:57.533147  350425 addons.go:239] Setting addon ingress-dns=true in "addons-831846"
	I1124 02:24:57.533151  350425 addons.go:239] Setting addon volcano=true in "addons-831846"
	I1124 02:24:57.533172  350425 addons.go:70] Setting gcp-auth=true in profile "addons-831846"
	I1124 02:24:57.533196  350425 mustload.go:66] Loading cluster: addons-831846
	I1124 02:24:57.533197  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533231  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533221  350425 addons.go:70] Setting storage-provisioner=true in profile "addons-831846"
	I1124 02:24:57.533257  350425 addons.go:239] Setting addon storage-provisioner=true in "addons-831846"
	I1124 02:24:57.533287  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533389  350425 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:24:57.533503  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533624  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533712  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533744  350425 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-831846"
	I1124 02:24:57.533780  350425 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-831846"
	I1124 02:24:57.533799  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533078  350425 addons.go:239] Setting addon cloud-spanner=true in "addons-831846"
	I1124 02:24:57.533951  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.534084  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.534406  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533121  350425 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-831846"
	I1124 02:24:57.534601  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.535057  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533061  350425 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-831846"
	I1124 02:24:57.535286  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.535780  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533025  350425 addons.go:70] Setting ingress=true in profile "addons-831846"
	I1124 02:24:57.535924  350425 addons.go:239] Setting addon ingress=true in "addons-831846"
	I1124 02:24:57.536013  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.536478  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.537619  350425 out.go:179] * Verifying Kubernetes components...
	I1124 02:24:57.533092  350425 addons.go:70] Setting inspektor-gadget=true in profile "addons-831846"
	I1124 02:24:57.537937  350425 addons.go:239] Setting addon inspektor-gadget=true in "addons-831846"
	I1124 02:24:57.537980  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.538436  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533716  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533727  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533109  350425 addons.go:239] Setting addon metrics-server=true in "addons-831846"
	I1124 02:24:57.539117  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533736  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.541403  350425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 02:24:57.533090  350425 addons.go:239] Setting addon registry=true in "addons-831846"
	I1124 02:24:57.541832  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533126  350425 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-831846"
	I1124 02:24:57.542098  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533732  350425 addons.go:70] Setting volumesnapshots=true in profile "addons-831846"
	I1124 02:24:57.542989  350425 addons.go:239] Setting addon volumesnapshots=true in "addons-831846"
	I1124 02:24:57.543290  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.545543  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.547727  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.548471  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.549625  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.572660  350425 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-831846"
	I1124 02:24:57.572763  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.573289  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.588466  350425 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 02:24:57.589800  350425 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 02:24:57.589820  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 02:24:57.589878  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.590467  350425 addons.go:239] Setting addon default-storageclass=true in "addons-831846"
	I1124 02:24:57.590540  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.592502  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.595366  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	W1124 02:24:57.611091  350425 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 02:24:57.615421  350425 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 02:24:57.615559  350425 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 02:24:57.615632  350425 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 02:24:57.621396  350425 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 02:24:57.621418  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 02:24:57.621479  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.626512  350425 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 02:24:57.626555  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 02:24:57.626620  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.626709  350425 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 02:24:57.626755  350425 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 02:24:57.626841  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.628754  350425 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 02:24:57.629784  350425 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 02:24:57.630284  350425 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 02:24:57.630302  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 02:24:57.630370  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.630954  350425 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 02:24:57.631105  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 02:24:57.631201  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 02:24:57.631265  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.634706  350425 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 02:24:57.634707  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 02:24:57.635087  350425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 02:24:57.635107  350425 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 02:24:57.635156  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.636017  350425 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 02:24:57.636038  350425 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 02:24:57.636112  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.636647  350425 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 02:24:57.637044  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 02:24:57.637614  350425 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 02:24:57.637635  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 02:24:57.637692  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.643349  350425 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 02:24:57.643402  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 02:24:57.645354  350425 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 02:24:57.646386  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 02:24:57.646547  350425 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 02:24:57.646561  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 02:24:57.646636  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.650757  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 02:24:57.650956  350425 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 02:24:57.652265  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 02:24:57.652304  350425 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 02:24:57.652316  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 02:24:57.652369  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.654565  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 02:24:57.656242  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 02:24:57.657250  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 02:24:57.657360  350425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 02:24:57.657479  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.660079  350425 out.go:179]   - Using image docker.io/busybox:stable
	I1124 02:24:57.661110  350425 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 02:24:57.662129  350425 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 02:24:57.663303  350425 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 02:24:57.663308  350425 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 02:24:57.663580  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 02:24:57.663938  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.668235  350425 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 02:24:57.668254  350425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 02:24:57.668304  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.671548  350425 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 02:24:57.674405  350425 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 02:24:57.674425  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 02:24:57.674477  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.675125  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.677844  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.681856  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.693031  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.702034  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.710329  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.710739  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.712185  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.717918  350425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
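The sed pipeline above splices a hosts stanza into the live Corefile so that host.minikube.internal resolves to the host gateway (192.168.49.1) from inside pods, and inserts log ahead of errors; the change is confirmed at 02:24:58.117389 below. The resulting Corefile can be inspected with:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'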
	I1124 02:24:57.733860  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.736166  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.742045  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.742552  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.743510  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.745246  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.745785  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.779079  350425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 02:24:57.845355  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 02:24:57.853586  350425 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 02:24:57.853608  350425 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 02:24:57.871911  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 02:24:57.874735  350425 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 02:24:57.874761  350425 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 02:24:57.884312  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 02:24:57.887655  350425 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 02:24:57.887677  350425 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 02:24:57.889707  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 02:24:57.889725  350425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 02:24:57.901828  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 02:24:57.902991  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 02:24:57.911196  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 02:24:57.911307  350425 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 02:24:57.911320  350425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 02:24:57.917426  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 02:24:57.918457  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 02:24:57.923204  350425 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 02:24:57.923225  350425 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 02:24:57.923473  350425 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 02:24:57.923491  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 02:24:57.924359  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 02:24:57.929522  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 02:24:57.934981  350425 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 02:24:57.935003  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 02:24:57.939063  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 02:24:57.939083  350425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 02:24:57.954178  350425 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 02:24:57.954202  350425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 02:24:57.975801  350425 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 02:24:57.975827  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 02:24:57.979122  350425 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 02:24:57.979144  350425 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 02:24:57.992600  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 02:24:57.992622  350425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 02:24:58.009061  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 02:24:58.023069  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 02:24:58.031315  350425 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 02:24:58.031387  350425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 02:24:58.042343  350425 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 02:24:58.042365  350425 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 02:24:58.062315  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 02:24:58.062411  350425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 02:24:58.095786  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 02:24:58.095812  350425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 02:24:58.102318  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 02:24:58.102403  350425 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 02:24:58.113156  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 02:24:58.117111  350425 node_ready.go:35] waiting up to 6m0s for node "addons-831846" to be "Ready" ...
	I1124 02:24:58.117389  350425 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
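Both milestones above can be verified by hand. A minimal sketch, assuming kubectl on the host is pointed at the addons-831846 cluster; grepping the whole ConfigMap avoids assuming exactly where minikube writes the record:

	kubectl wait --for=condition=Ready node/addons-831846 --timeout=6m
	kubectl -n kube-system get configmap coredns -o yaml | grep host.minikube.internal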
	I1124 02:24:58.130758  350425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 02:24:58.130778  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 02:24:58.163656  350425 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 02:24:58.163687  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 02:24:58.197458  350425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 02:24:58.197490  350425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 02:24:58.249202  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 02:24:58.275562  350425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 02:24:58.275599  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 02:24:58.318459  350425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 02:24:58.318487  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 02:24:58.356736  350425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 02:24:58.356767  350425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 02:24:58.401410  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 02:24:58.626429  350425 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-831846" context rescaled to 1 replicas
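The rescale above is a plain deployment scale; a hedged equivalent, assuming the stock coredns deployment in kube-system:

	kubectl -n kube-system scale deployment coredns --replicas=1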
	I1124 02:24:59.062241  350425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.132681733s)
	I1124 02:24:59.062289  350425 addons.go:495] Verifying addon ingress=true in "addons-831846"
	I1124 02:24:59.062287  350425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.053188039s)
	I1124 02:24:59.062320  350425 addons.go:495] Verifying addon registry=true in "addons-831846"
	I1124 02:24:59.062388  350425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.039175109s)
	I1124 02:24:59.062470  350425 addons.go:495] Verifying addon metrics-server=true in "addons-831846"
	I1124 02:24:59.064731  350425 out.go:179] * Verifying ingress addon...
	I1124 02:24:59.064757  350425 out.go:179] * Verifying registry addon...
	I1124 02:24:59.064801  350425 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-831846 service yakd-dashboard -n yakd-dashboard
	
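To know when that service command will work, pod readiness can be checked first; a sketch, assuming the deployment is named after the yakd-dashboard service shown above:

	kubectl -n yakd-dashboard rollout status deployment/yakd-dashboard --timeout=2m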
	I1124 02:24:59.067295  350425 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 02:24:59.067338  350425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 02:24:59.070308  350425 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 02:24:59.070324  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:24:59.070422  350425 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 02:24:59.070440  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
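The kapi.go polling that fills the rest of this log can be reproduced with label-selector watches; a sketch using the same selectors and namespaces logged above:

	kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=registry --watch
	kubectl -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx --watch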
	I1124 02:24:59.483619  350425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.234356382s)
	W1124 02:24:59.483732  350425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 02:24:59.483770  350425 retry.go:31] will retry after 253.880977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
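This is a CRD-establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, before the API server has registered the new kind, so the mapping lookup fails. The retry below gets through with apply --force; an alternative sketch is to wait for the CRDs first (names taken from the stdout above; file paths are as staged on the node):

	kubectl wait --for=condition=established --timeout=60s \
	  crd/volumesnapshotclasses.snapshot.storage.k8s.io \
	  crd/volumesnapshotcontents.snapshot.storage.k8s.io \
	  crd/volumesnapshots.snapshot.storage.k8s.io
	kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml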
	I1124 02:24:59.483771  350425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.082299898s)
	I1124 02:24:59.483819  350425 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-831846"
	I1124 02:24:59.485738  350425 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 02:24:59.487691  350425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 02:24:59.489749  350425 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 02:24:59.489774  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:24:59.569974  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:24:59.570177  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:24:59.738517  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 02:24:59.990462  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:00.091088  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:00.091323  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:00.120043  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:00.490486  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:00.591473  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:00.591689  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:00.991274  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:01.091486  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:01.091601  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:01.490922  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:01.569811  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:01.570017  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:01.990427  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:02.069941  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:02.070190  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:02.194057  350425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.455487376s)
	I1124 02:25:02.490394  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:02.569607  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:02.569680  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:02.618911  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:02.990531  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:03.091197  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:03.091412  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:03.490997  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:03.570086  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:03.570249  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:03.990755  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:04.091728  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:04.091850  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:04.490963  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:04.569952  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:04.570091  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:04.619688  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:04.990352  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:05.090553  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:05.090624  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:05.204597  350425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 02:25:05.204678  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:25:05.222156  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:25:05.325356  350425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 02:25:05.337389  350425 addons.go:239] Setting addon gcp-auth=true in "addons-831846"
	I1124 02:25:05.337445  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:25:05.337822  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:25:05.355243  350425 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 02:25:05.355289  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:25:05.372195  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
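The cli_runner/sshutil pair above resolves the container's published 22/tcp port and opens an SSH session as the docker user. A hedged manual equivalent, assuming the port (33138) and key path logged above are still current:

	ssh -p 33138 -i /home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa docker@127.0.0.1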
	I1124 02:25:05.466809  350425 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 02:25:05.467786  350425 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 02:25:05.468917  350425 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 02:25:05.468936  350425 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 02:25:05.480978  350425 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 02:25:05.481003  350425 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 02:25:05.490904  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:05.493589  350425 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 02:25:05.493605  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 02:25:05.505077  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 02:25:05.570568  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:05.570724  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:05.791538  350425 addons.go:495] Verifying addon gcp-auth=true in "addons-831846"
	I1124 02:25:05.792626  350425 out.go:179] * Verifying gcp-auth addon...
	I1124 02:25:05.794186  350425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 02:25:05.796313  350425 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 02:25:05.796326  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:05.990660  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:06.069721  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:06.069767  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:06.296465  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:06.491011  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:06.570027  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:06.570306  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:06.797443  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:06.990734  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:07.069849  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:07.070121  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:07.119720  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:07.296946  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:07.490082  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:07.570302  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:07.570456  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:07.796312  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:07.990857  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:08.069972  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:08.070172  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:08.297207  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:08.490700  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:08.569848  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:08.570087  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:08.797176  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:08.990926  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:09.070193  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:09.070269  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:09.120032  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:09.297102  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:09.490168  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:09.570100  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:09.570229  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:09.797334  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:09.990562  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:10.069480  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:10.069642  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:10.297585  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:10.490833  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:10.569840  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:10.570162  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:10.796988  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:10.990055  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:11.070294  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:11.070379  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:11.297241  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:11.490444  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:11.569598  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:11.569757  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:11.619516  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:11.796776  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:11.990176  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:12.070223  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:12.070434  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:12.297520  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:12.491133  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:12.570759  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:12.570790  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:12.797223  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:12.990675  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:13.070007  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:13.070190  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:13.297110  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:13.490380  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:13.569462  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:13.569715  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:13.796461  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:13.990675  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:14.069733  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:14.069743  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:14.119333  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:14.296503  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:14.490991  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:14.571002  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:14.571633  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:14.796867  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:14.990189  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:15.070241  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:15.070492  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:15.297221  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:15.490596  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:15.569495  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:15.569763  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:15.796171  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:15.990679  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:16.069772  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:16.069981  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:16.119664  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:16.296957  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:16.490391  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:16.570517  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:16.570673  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:16.796667  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:16.991271  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:17.070512  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:17.070611  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:17.296806  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:17.489871  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:17.569875  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:17.570129  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:17.797165  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:17.990535  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:18.069927  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:18.069980  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:18.119859  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:18.297338  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:18.490751  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:18.569611  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:18.569742  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:18.796288  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:18.990391  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:19.069581  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:19.069773  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:19.297228  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:19.490534  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:19.569783  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:19.570011  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:19.797399  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:19.990506  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:20.069592  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:20.069775  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:20.297005  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:20.490557  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:20.569765  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:20.569828  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:20.619553  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:20.796806  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:20.990009  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:21.070162  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:21.070421  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:21.297039  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:21.490217  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:21.570379  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:21.570511  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:21.796403  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:21.990861  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:22.070067  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:22.070307  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:22.297592  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:22.491216  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:22.570533  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:22.570689  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:22.619687  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:22.797458  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:22.990755  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:23.069903  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:23.070081  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:23.297074  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:23.490011  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:23.570014  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:23.570085  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:23.797074  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:23.990139  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:24.070157  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:24.070351  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:24.297249  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:24.490391  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:24.570357  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:24.570525  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:24.796410  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:24.990921  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:25.070579  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:25.070660  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:25.119458  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:25.296660  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:25.490987  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:25.569801  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:25.569990  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:25.796902  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:25.990127  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:26.070472  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:26.070591  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:26.300242  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:26.490473  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:26.569331  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:26.569494  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:26.796047  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:26.990397  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:27.069693  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:27.069862  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:27.119596  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:27.296977  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:27.490289  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:27.569533  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:27.569590  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:27.796728  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:27.990251  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:28.070501  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:28.070743  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:28.296899  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:28.490176  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:28.570134  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:28.570265  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:28.797159  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:28.990359  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:29.070605  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:29.070647  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:29.296729  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:29.489644  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:29.569615  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:29.569826  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:29.619507  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:29.796762  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:29.989968  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:30.070219  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:30.070434  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:30.296442  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:30.490734  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:30.569914  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:30.570058  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:30.797202  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:30.990350  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:31.070478  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:31.070540  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:31.296241  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:31.490243  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:31.570302  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:31.570326  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:31.797280  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:31.990455  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:32.069525  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:32.069710  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:32.119515  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:32.296984  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:32.490450  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:32.569692  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:32.569802  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:32.797341  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:32.990943  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:33.070534  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:33.070563  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:33.296766  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:33.489849  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:33.570075  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:33.570074  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:33.796665  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:33.990918  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:34.070092  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:34.070178  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:34.119950  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:34.297168  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:34.490409  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:34.570379  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:34.570441  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:34.796379  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:34.991237  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:35.070606  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:35.070763  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:35.296863  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:35.489951  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:35.570139  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:35.570144  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:35.796719  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:35.989857  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:36.070013  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:36.070128  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:36.120260  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:36.296591  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:36.489931  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:36.569946  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:36.570019  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:36.796902  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:36.990208  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:37.070546  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:37.070740  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:37.297030  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:37.490348  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:37.569540  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:37.569720  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:37.797090  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:37.990614  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:38.069794  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:38.070001  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:38.298187  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:38.491718  350425 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 02:25:38.491739  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:38.594337  350425 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 02:25:38.594366  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:38.594910  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:38.621508  350425 node_ready.go:49] node "addons-831846" is "Ready"
	I1124 02:25:38.621543  350425 node_ready.go:38] duration metric: took 40.504389312s for node "addons-831846" to be "Ready" ...
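	The Ready gate that just cleared is a poll on the node object's Ready condition. A minimal client-go sketch of that check follows; the kubeconfig path and the hard-coded node name are assumptions for illustration, not how node_ready.go wires its own client:
	
	package main
	
	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		home, _ := os.UserHomeDir()
		// Assumption: a kubeconfig for the cluster lives at ~/.kube/config.
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.Background(), "addons-831846", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// node_ready.go retries (the W... lines above) while this is not "True".
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("node %s Ready=%s\n", node.Name, c.Status)
			}
		}
	}
	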
	I1124 02:25:38.621564  350425 api_server.go:52] waiting for apiserver process to appear ...
	I1124 02:25:38.621625  350425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 02:25:38.640077  350425 api_server.go:72] duration metric: took 41.107263444s to wait for apiserver process to appear ...
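	The process probe above is pgrep executed on the node (minikube runs it over SSH via ssh_runner.go). A local stand-in using only the standard library, reusing the exact pattern from the log; running it directly on the host rather than over SSH is the simplifying assumption:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// -x: match the whole command line, -n: newest match, -f: match full args.
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			// pgrep exits non-zero when nothing matches.
			fmt.Println("kube-apiserver process not found:", err)
			return
		}
		fmt.Printf("kube-apiserver pid: %s", out)
	}
	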
	I1124 02:25:38.640108  350425 api_server.go:88] waiting for apiserver healthz status ...
	I1124 02:25:38.640167  350425 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 02:25:38.645428  350425 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 02:25:38.646463  350425 api_server.go:141] control plane version: v1.34.1
	I1124 02:25:38.646491  350425 api_server.go:131] duration metric: took 6.374896ms to wait for apiserver health ...
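	The healthz wait is a plain HTTPS GET that expects a 200 status and the body "ok". A self-contained sketch of that probe; skipping TLS verification is an assumption made to keep the example short, where minikube itself authenticates with the cluster's credentials:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption for brevity; a real caller should trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("%d: %s\n", resp.StatusCode, body) // the log above saw "200: ok"
	}
	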
	I1124 02:25:38.646505  350425 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 02:25:38.693488  350425 system_pods.go:59] 20 kube-system pods found
	I1124 02:25:38.693547  350425 system_pods.go:61] "amd-gpu-device-plugin-6f6fp" [f718fb18-20f0-4f35-931c-3a308c345e99] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 02:25:38.693561  350425 system_pods.go:61] "coredns-66bc5c9577-rdmxf" [89daa76d-6cb5-46b6-80c2-e6feea646c06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 02:25:38.693580  350425 system_pods.go:61] "csi-hostpath-attacher-0" [dfd1a64e-cd18-4f88-9ae8-933351bf5cff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 02:25:38.693589  350425 system_pods.go:61] "csi-hostpath-resizer-0" [afe0c71b-407a-426f-9323-d835b8f2e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 02:25:38.693599  350425 system_pods.go:61] "csi-hostpathplugin-lmkkf" [c8b973a0-b84c-4c2d-b283-00643dfccac7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 02:25:38.693606  350425 system_pods.go:61] "etcd-addons-831846" [bc9df316-a146-46c0-a791-a90e33b10de5] Running
	I1124 02:25:38.693610  350425 system_pods.go:61] "kindnet-8rv8j" [e51a1ca8-a9af-4c1f-bdaf-27b503467a22] Running
	I1124 02:25:38.693616  350425 system_pods.go:61] "kube-apiserver-addons-831846" [fb66712c-5e47-4256-906c-544eac5dbb55] Running
	I1124 02:25:38.693620  350425 system_pods.go:61] "kube-controller-manager-addons-831846" [dcc8ac48-94da-440d-aa7c-f5563637926a] Running
	I1124 02:25:38.693629  350425 system_pods.go:61] "kube-ingress-dns-minikube" [a8c07c0c-a689-4a36-98ed-b97c0d8c59e2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 02:25:38.693632  350425 system_pods.go:61] "kube-proxy-78b65" [78176f75-edaa-46ac-803c-1f08847b0345] Running
	I1124 02:25:38.693636  350425 system_pods.go:61] "kube-scheduler-addons-831846" [91ee0d85-a527-476c-bedd-b5ec3faa6ee8] Running
	I1124 02:25:38.693640  350425 system_pods.go:61] "metrics-server-85b7d694d7-jmkg5" [a7773bc3-3047-45ae-ac07-238fe7a6282f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 02:25:38.693650  350425 system_pods.go:61] "nvidia-device-plugin-daemonset-gf6tr" [59b3a9aa-53dd-4231-97bd-2015d666639c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 02:25:38.693661  350425 system_pods.go:61] "registry-6b586f9694-fmpk9" [f51b7a5d-73cd-404e-87db-f7b56c46e8fc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 02:25:38.693677  350425 system_pods.go:61] "registry-creds-764b6fb674-h45vm" [8eac9d40-11df-4d1b-b4ed-05e91b6db498] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 02:25:38.693688  350425 system_pods.go:61] "registry-proxy-qnxkh" [492c9528-7caf-47b5-86ce-62e1cf455391] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 02:25:38.693698  350425 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5mjkq" [77bb303b-f029-49fb-bc4a-12059b110a5a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 02:25:38.693708  350425 system_pods.go:61] "snapshot-controller-7d9fbc56b8-hhf7t" [cffe1923-8bb2-43b0-a27d-15c013c3e481] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 02:25:38.693715  350425 system_pods.go:61] "storage-provisioner" [2bb8d9da-7fe4-4c71-b117-512944dd1208] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 02:25:38.693721  350425 system_pods.go:74] duration metric: took 47.210583ms to wait for pod list to return data ...
	I1124 02:25:38.693732  350425 default_sa.go:34] waiting for default service account to be created ...
	I1124 02:25:38.695570  350425 default_sa.go:45] found service account: "default"
	I1124 02:25:38.695589  350425 default_sa.go:55] duration metric: took 1.852501ms for default service account to be created ...
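	The default service account is created asynchronously by kube-controller-manager after the namespace appears, which is why default_sa.go polls for it rather than assuming it exists. A client-go sketch of the same lookup, again assuming a kubeconfig at ~/.kube/config:
	
	package main
	
	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// A Get that errors with NotFound means the controller has not created it yet.
		sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.Background(), "default", metav1.GetOptions{})
		if err != nil {
			fmt.Println("default service account not ready yet:", err)
			return
		}
		fmt.Println("found service account:", sa.Name)
	}
	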
	I1124 02:25:38.695598  350425 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 02:25:38.699495  350425 system_pods.go:86] 20 kube-system pods found
	I1124 02:25:38.699536  350425 system_pods.go:89] "amd-gpu-device-plugin-6f6fp" [f718fb18-20f0-4f35-931c-3a308c345e99] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 02:25:38.699548  350425 system_pods.go:89] "coredns-66bc5c9577-rdmxf" [89daa76d-6cb5-46b6-80c2-e6feea646c06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 02:25:38.699564  350425 system_pods.go:89] "csi-hostpath-attacher-0" [dfd1a64e-cd18-4f88-9ae8-933351bf5cff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 02:25:38.699578  350425 system_pods.go:89] "csi-hostpath-resizer-0" [afe0c71b-407a-426f-9323-d835b8f2e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 02:25:38.699588  350425 system_pods.go:89] "csi-hostpathplugin-lmkkf" [c8b973a0-b84c-4c2d-b283-00643dfccac7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 02:25:38.699595  350425 system_pods.go:89] "etcd-addons-831846" [bc9df316-a146-46c0-a791-a90e33b10de5] Running
	I1124 02:25:38.699606  350425 system_pods.go:89] "kindnet-8rv8j" [e51a1ca8-a9af-4c1f-bdaf-27b503467a22] Running
	I1124 02:25:38.699612  350425 system_pods.go:89] "kube-apiserver-addons-831846" [fb66712c-5e47-4256-906c-544eac5dbb55] Running
	I1124 02:25:38.699620  350425 system_pods.go:89] "kube-controller-manager-addons-831846" [dcc8ac48-94da-440d-aa7c-f5563637926a] Running
	I1124 02:25:38.699630  350425 system_pods.go:89] "kube-ingress-dns-minikube" [a8c07c0c-a689-4a36-98ed-b97c0d8c59e2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 02:25:38.699641  350425 system_pods.go:89] "kube-proxy-78b65" [78176f75-edaa-46ac-803c-1f08847b0345] Running
	I1124 02:25:38.699647  350425 system_pods.go:89] "kube-scheduler-addons-831846" [91ee0d85-a527-476c-bedd-b5ec3faa6ee8] Running
	I1124 02:25:38.699655  350425 system_pods.go:89] "metrics-server-85b7d694d7-jmkg5" [a7773bc3-3047-45ae-ac07-238fe7a6282f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 02:25:38.699669  350425 system_pods.go:89] "nvidia-device-plugin-daemonset-gf6tr" [59b3a9aa-53dd-4231-97bd-2015d666639c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 02:25:38.699680  350425 system_pods.go:89] "registry-6b586f9694-fmpk9" [f51b7a5d-73cd-404e-87db-f7b56c46e8fc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 02:25:38.699692  350425 system_pods.go:89] "registry-creds-764b6fb674-h45vm" [8eac9d40-11df-4d1b-b4ed-05e91b6db498] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 02:25:38.699703  350425 system_pods.go:89] "registry-proxy-qnxkh" [492c9528-7caf-47b5-86ce-62e1cf455391] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 02:25:38.699713  350425 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5mjkq" [77bb303b-f029-49fb-bc4a-12059b110a5a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 02:25:38.699727  350425 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hhf7t" [cffe1923-8bb2-43b0-a27d-15c013c3e481] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 02:25:38.699735  350425 system_pods.go:89] "storage-provisioner" [2bb8d9da-7fe4-4c71-b117-512944dd1208] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 02:25:38.699754  350425 retry.go:31] will retry after 224.455967ms: missing components: kube-dns
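	The retry.go lines here and below show the wait loop's shape: re-list the kube-system pods after a short, growing, jittered delay (224ms, then 339ms, then 442ms) until no required component such as kube-dns is missing. A sketch of that pattern under stated assumptions; the check function is a placeholder, not minikube's real system_pods verification, and the exact growth factor is inferred from the logged intervals:
	
	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	func retryWithBackoff(check func() error, attempts int) error {
		delay := 200 * time.Millisecond
		for i := 0; i < attempts; i++ {
			if err := check(); err == nil {
				return nil
			} else {
				wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
				fmt.Printf("will retry after %v: %v\n", wait, err)
				time.Sleep(wait)
				delay += delay / 2 // grow roughly 1.5x, as the logged intervals suggest
			}
		}
		return fmt.Errorf("components still missing after %d attempts", attempts)
	}
	
	func main() {
		calls := 0
		_ = retryWithBackoff(func() error {
			calls++
			if calls < 3 {
				return fmt.Errorf("missing components: kube-dns")
			}
			return nil
		}, 10)
	}
	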
	I1124 02:25:38.796190  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:38.928493  350425 system_pods.go:86] 20 kube-system pods found
	I1124 02:25:38.928535  350425 system_pods.go:89] "amd-gpu-device-plugin-6f6fp" [f718fb18-20f0-4f35-931c-3a308c345e99] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 02:25:38.928547  350425 system_pods.go:89] "coredns-66bc5c9577-rdmxf" [89daa76d-6cb5-46b6-80c2-e6feea646c06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 02:25:38.928558  350425 system_pods.go:89] "csi-hostpath-attacher-0" [dfd1a64e-cd18-4f88-9ae8-933351bf5cff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 02:25:38.928565  350425 system_pods.go:89] "csi-hostpath-resizer-0" [afe0c71b-407a-426f-9323-d835b8f2e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 02:25:38.928578  350425 system_pods.go:89] "csi-hostpathplugin-lmkkf" [c8b973a0-b84c-4c2d-b283-00643dfccac7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 02:25:38.928589  350425 system_pods.go:89] "etcd-addons-831846" [bc9df316-a146-46c0-a791-a90e33b10de5] Running
	I1124 02:25:38.928595  350425 system_pods.go:89] "kindnet-8rv8j" [e51a1ca8-a9af-4c1f-bdaf-27b503467a22] Running
	I1124 02:25:38.928603  350425 system_pods.go:89] "kube-apiserver-addons-831846" [fb66712c-5e47-4256-906c-544eac5dbb55] Running
	I1124 02:25:38.928608  350425 system_pods.go:89] "kube-controller-manager-addons-831846" [dcc8ac48-94da-440d-aa7c-f5563637926a] Running
	I1124 02:25:38.928616  350425 system_pods.go:89] "kube-ingress-dns-minikube" [a8c07c0c-a689-4a36-98ed-b97c0d8c59e2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 02:25:38.928619  350425 system_pods.go:89] "kube-proxy-78b65" [78176f75-edaa-46ac-803c-1f08847b0345] Running
	I1124 02:25:38.928625  350425 system_pods.go:89] "kube-scheduler-addons-831846" [91ee0d85-a527-476c-bedd-b5ec3faa6ee8] Running
	I1124 02:25:38.928632  350425 system_pods.go:89] "metrics-server-85b7d694d7-jmkg5" [a7773bc3-3047-45ae-ac07-238fe7a6282f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 02:25:38.928638  350425 system_pods.go:89] "nvidia-device-plugin-daemonset-gf6tr" [59b3a9aa-53dd-4231-97bd-2015d666639c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 02:25:38.928643  350425 system_pods.go:89] "registry-6b586f9694-fmpk9" [f51b7a5d-73cd-404e-87db-f7b56c46e8fc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 02:25:38.928649  350425 system_pods.go:89] "registry-creds-764b6fb674-h45vm" [8eac9d40-11df-4d1b-b4ed-05e91b6db498] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 02:25:38.928654  350425 system_pods.go:89] "registry-proxy-qnxkh" [492c9528-7caf-47b5-86ce-62e1cf455391] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 02:25:38.928659  350425 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5mjkq" [77bb303b-f029-49fb-bc4a-12059b110a5a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 02:25:38.928667  350425 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hhf7t" [cffe1923-8bb2-43b0-a27d-15c013c3e481] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 02:25:38.928672  350425 system_pods.go:89] "storage-provisioner" [2bb8d9da-7fe4-4c71-b117-512944dd1208] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 02:25:38.928689  350425 retry.go:31] will retry after 339.908579ms: missing components: kube-dns
	I1124 02:25:38.991612  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:39.070309  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:39.070383  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:39.273969  350425 system_pods.go:86] 20 kube-system pods found
	I1124 02:25:39.274007  350425 system_pods.go:89] "amd-gpu-device-plugin-6f6fp" [f718fb18-20f0-4f35-931c-3a308c345e99] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 02:25:39.274019  350425 system_pods.go:89] "coredns-66bc5c9577-rdmxf" [89daa76d-6cb5-46b6-80c2-e6feea646c06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 02:25:39.274029  350425 system_pods.go:89] "csi-hostpath-attacher-0" [dfd1a64e-cd18-4f88-9ae8-933351bf5cff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 02:25:39.274038  350425 system_pods.go:89] "csi-hostpath-resizer-0" [afe0c71b-407a-426f-9323-d835b8f2e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 02:25:39.274048  350425 system_pods.go:89] "csi-hostpathplugin-lmkkf" [c8b973a0-b84c-4c2d-b283-00643dfccac7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 02:25:39.274057  350425 system_pods.go:89] "etcd-addons-831846" [bc9df316-a146-46c0-a791-a90e33b10de5] Running
	I1124 02:25:39.274065  350425 system_pods.go:89] "kindnet-8rv8j" [e51a1ca8-a9af-4c1f-bdaf-27b503467a22] Running
	I1124 02:25:39.274074  350425 system_pods.go:89] "kube-apiserver-addons-831846" [fb66712c-5e47-4256-906c-544eac5dbb55] Running
	I1124 02:25:39.274081  350425 system_pods.go:89] "kube-controller-manager-addons-831846" [dcc8ac48-94da-440d-aa7c-f5563637926a] Running
	I1124 02:25:39.274093  350425 system_pods.go:89] "kube-ingress-dns-minikube" [a8c07c0c-a689-4a36-98ed-b97c0d8c59e2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 02:25:39.274098  350425 system_pods.go:89] "kube-proxy-78b65" [78176f75-edaa-46ac-803c-1f08847b0345] Running
	I1124 02:25:39.274105  350425 system_pods.go:89] "kube-scheduler-addons-831846" [91ee0d85-a527-476c-bedd-b5ec3faa6ee8] Running
	I1124 02:25:39.274114  350425 system_pods.go:89] "metrics-server-85b7d694d7-jmkg5" [a7773bc3-3047-45ae-ac07-238fe7a6282f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 02:25:39.274128  350425 system_pods.go:89] "nvidia-device-plugin-daemonset-gf6tr" [59b3a9aa-53dd-4231-97bd-2015d666639c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 02:25:39.274139  350425 system_pods.go:89] "registry-6b586f9694-fmpk9" [f51b7a5d-73cd-404e-87db-f7b56c46e8fc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 02:25:39.274154  350425 system_pods.go:89] "registry-creds-764b6fb674-h45vm" [8eac9d40-11df-4d1b-b4ed-05e91b6db498] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 02:25:39.274163  350425 system_pods.go:89] "registry-proxy-qnxkh" [492c9528-7caf-47b5-86ce-62e1cf455391] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 02:25:39.274172  350425 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5mjkq" [77bb303b-f029-49fb-bc4a-12059b110a5a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 02:25:39.274181  350425 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hhf7t" [cffe1923-8bb2-43b0-a27d-15c013c3e481] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 02:25:39.274188  350425 system_pods.go:89] "storage-provisioner" [2bb8d9da-7fe4-4c71-b117-512944dd1208] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 02:25:39.274214  350425 retry.go:31] will retry after 442.797405ms: missing components: kube-dns
	I1124 02:25:39.296834  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:39.490754  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:39.570959  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:39.571204  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:39.722201  350425 system_pods.go:86] 20 kube-system pods found
	I1124 02:25:39.722235  350425 system_pods.go:89] "amd-gpu-device-plugin-6f6fp" [f718fb18-20f0-4f35-931c-3a308c345e99] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 02:25:39.722246  350425 system_pods.go:89] "coredns-66bc5c9577-rdmxf" [89daa76d-6cb5-46b6-80c2-e6feea646c06] Running
	I1124 02:25:39.722255  350425 system_pods.go:89] "csi-hostpath-attacher-0" [dfd1a64e-cd18-4f88-9ae8-933351bf5cff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 02:25:39.722260  350425 system_pods.go:89] "csi-hostpath-resizer-0" [afe0c71b-407a-426f-9323-d835b8f2e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 02:25:39.722265  350425 system_pods.go:89] "csi-hostpathplugin-lmkkf" [c8b973a0-b84c-4c2d-b283-00643dfccac7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 02:25:39.722269  350425 system_pods.go:89] "etcd-addons-831846" [bc9df316-a146-46c0-a791-a90e33b10de5] Running
	I1124 02:25:39.722274  350425 system_pods.go:89] "kindnet-8rv8j" [e51a1ca8-a9af-4c1f-bdaf-27b503467a22] Running
	I1124 02:25:39.722277  350425 system_pods.go:89] "kube-apiserver-addons-831846" [fb66712c-5e47-4256-906c-544eac5dbb55] Running
	I1124 02:25:39.722281  350425 system_pods.go:89] "kube-controller-manager-addons-831846" [dcc8ac48-94da-440d-aa7c-f5563637926a] Running
	I1124 02:25:39.722287  350425 system_pods.go:89] "kube-ingress-dns-minikube" [a8c07c0c-a689-4a36-98ed-b97c0d8c59e2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 02:25:39.722290  350425 system_pods.go:89] "kube-proxy-78b65" [78176f75-edaa-46ac-803c-1f08847b0345] Running
	I1124 02:25:39.722294  350425 system_pods.go:89] "kube-scheduler-addons-831846" [91ee0d85-a527-476c-bedd-b5ec3faa6ee8] Running
	I1124 02:25:39.722301  350425 system_pods.go:89] "metrics-server-85b7d694d7-jmkg5" [a7773bc3-3047-45ae-ac07-238fe7a6282f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 02:25:39.722307  350425 system_pods.go:89] "nvidia-device-plugin-daemonset-gf6tr" [59b3a9aa-53dd-4231-97bd-2015d666639c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 02:25:39.722312  350425 system_pods.go:89] "registry-6b586f9694-fmpk9" [f51b7a5d-73cd-404e-87db-f7b56c46e8fc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 02:25:39.722317  350425 system_pods.go:89] "registry-creds-764b6fb674-h45vm" [8eac9d40-11df-4d1b-b4ed-05e91b6db498] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 02:25:39.722322  350425 system_pods.go:89] "registry-proxy-qnxkh" [492c9528-7caf-47b5-86ce-62e1cf455391] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 02:25:39.722330  350425 system_pods.go:89] "snapshot-controller-7d9fbc56b8-5mjkq" [77bb303b-f029-49fb-bc4a-12059b110a5a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 02:25:39.722335  350425 system_pods.go:89] "snapshot-controller-7d9fbc56b8-hhf7t" [cffe1923-8bb2-43b0-a27d-15c013c3e481] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 02:25:39.722338  350425 system_pods.go:89] "storage-provisioner" [2bb8d9da-7fe4-4c71-b117-512944dd1208] Running
	I1124 02:25:39.722346  350425 system_pods.go:126] duration metric: took 1.02674263s to wait for k8s-apps to be running ...
	I1124 02:25:39.722356  350425 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 02:25:39.722398  350425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:25:39.736051  350425 system_svc.go:56] duration metric: took 13.685185ms WaitForService to wait for kubelet
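	The kubelet gate relies on systemctl's exit code: `is-active --quiet` exits 0 only when the unit is active. A local stand-in for the command above; minikube runs it over SSH inside the node container, so executing it directly on the host is the simplifying assumption here:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Exit status 0 means the kubelet unit is active; anything else surfaces as err.
		err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
		if err != nil {
			fmt.Println("kubelet service is not active:", err)
			return
		}
		fmt.Println("kubelet service is running")
	}
	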
	I1124 02:25:39.736082  350425 kubeadm.go:587] duration metric: took 42.203274661s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 02:25:39.736107  350425 node_conditions.go:102] verifying NodePressure condition ...
	I1124 02:25:39.738344  350425 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 02:25:39.738376  350425 node_conditions.go:123] node cpu capacity is 8
	I1124 02:25:39.738399  350425 node_conditions.go:105] duration metric: took 2.286417ms to run NodePressure ...
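	The NodePressure step reads each node's reported capacity (the 304681132Ki of ephemeral storage and 8 CPUs above) and confirms no pressure condition is set. A client-go sketch of one way to reproduce that read, with the usual kubeconfig assumption:
	
	package main
	
	import (
		"context"
		"fmt"
		"os"
		"path/filepath"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		home, _ := os.UserHomeDir()
		cfg, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name,
				n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
			// A True Memory/Disk pressure condition would fail the verification.
			for _, c := range n.Status.Conditions {
				if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
					c.Status == corev1.ConditionTrue {
					fmt.Printf("  pressure condition %s is True\n", c.Type)
				}
			}
		}
	}
	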
	I1124 02:25:39.738419  350425 start.go:242] waiting for startup goroutines ...
	I1124 02:25:39.796318  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:39.991872  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:40.071046  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:40.071045  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:40.298206  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:40.492088  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:40.573703  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:40.574126  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:40.797902  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:40.990807  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:41.070597  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:41.070634  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:41.297726  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:41.491716  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:41.571221  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:41.571292  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:41.797624  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:41.991431  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:42.091665  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:42.091741  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:42.297004  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:42.491168  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:42.591161  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:42.591349  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:42.797938  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:42.990798  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:43.070698  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:43.070791  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:43.297876  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:43.491168  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:43.571367  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:43.571437  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:43.797310  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:43.991167  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:44.070453  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:44.070517  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:44.297450  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:44.492238  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:44.572528  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:44.572739  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:44.797931  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:44.990775  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:45.070835  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:45.070872  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:45.298382  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:45.491734  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:45.570796  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:45.570879  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:45.798401  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:45.991542  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:46.071413  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:46.071611  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:46.297064  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:46.491361  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:46.571306  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:46.571497  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:46.861590  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:46.991159  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:47.070643  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:47.070712  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:47.297984  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:47.491408  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:47.571234  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:47.571450  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:47.797711  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:47.991560  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:48.071051  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:48.071072  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:48.298420  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:48.491570  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:48.570247  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:48.570269  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:48.797744  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:48.991255  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:49.071026  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:49.071235  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:49.297714  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:49.490422  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:49.571099  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:49.571216  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:49.797953  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:49.990730  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:50.070505  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:50.070756  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:50.297207  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:50.491401  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:50.571288  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:50.571471  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:50.797767  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:50.990209  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:51.070593  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:51.070643  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:51.296929  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:51.490936  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:51.570230  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:51.570342  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:51.797864  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:51.991286  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:52.071462  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:52.071462  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:52.297229  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:52.491080  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:52.570939  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:52.571091  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:52.797903  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:52.991520  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:53.071466  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:53.071483  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:53.297548  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:53.491479  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:53.570828  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:53.571084  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:53.797490  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:53.991640  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:54.070026  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:54.070144  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:54.297478  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:54.491486  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:54.571355  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:54.571651  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:54.797195  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:54.991535  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:55.070480  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:55.070510  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:55.297379  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:55.491660  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:55.592135  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:55.592161  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:55.796822  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:55.990313  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:56.070804  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:56.070829  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:56.297262  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:56.491743  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:56.570388  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:56.570397  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:56.797150  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:56.991019  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:57.070278  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:57.070297  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:57.296532  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:57.491143  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:57.570325  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:57.570353  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:57.797185  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:57.991734  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:58.071121  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:58.071299  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:58.297275  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:58.493392  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:58.571494  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:58.571545  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:58.797651  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:58.991126  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:59.071428  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:59.071524  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:59.296735  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:59.490327  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:59.571232  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:59.571268  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:59.797268  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:59.991186  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:00.070347  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:00.070448  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:00.297114  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:00.491434  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:00.571219  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:00.571312  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:00.796706  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:00.990520  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:01.069911  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:01.069946  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:01.297626  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:01.492045  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:01.570481  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:01.570523  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:01.797391  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:01.991331  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:02.070484  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:02.070496  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:02.297007  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:02.492314  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:02.571391  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:02.571398  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:02.797245  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:02.991447  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:03.070811  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:03.070877  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:03.297413  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:03.492866  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:03.573309  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:03.573914  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:03.798578  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:03.993101  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:04.071298  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:04.072298  350425 kapi.go:107] duration metric: took 1m5.004955371s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 02:26:04.297731  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:04.490266  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:04.570557  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:04.797580  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:04.991873  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:05.070803  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:05.297949  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:05.491811  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:05.570408  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:05.797548  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:05.991417  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:06.071627  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:06.297559  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:06.491153  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:06.572016  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:06.797239  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:06.992079  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:07.070541  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:07.297085  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:07.490427  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:07.569622  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:07.797916  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:07.991451  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:08.075332  350425 kapi.go:107] duration metric: took 1m9.008032916s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 02:26:08.297549  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:08.491073  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:08.798592  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:08.992671  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:09.297139  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:09.491499  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:09.796919  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:09.991273  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:10.297775  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:10.490630  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:10.798175  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:10.992217  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:11.297291  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:11.491287  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:11.796491  350425 kapi.go:107] duration metric: took 1m6.002302696s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 02:26:11.797907  350425 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-831846 cluster.
	I1124 02:26:11.798965  350425 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 02:26:11.799915  350425 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 02:26:11.991728  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:12.491029  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:12.992038  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:13.490761  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:13.991344  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:14.490831  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:14.990909  350425 kapi.go:107] duration metric: took 1m15.503183706s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 02:26:14.992794  350425 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, cloud-spanner, registry-creds, storage-provisioner-rancher, inspektor-gadget, nvidia-device-plugin, storage-provisioner, default-storageclass, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1124 02:26:14.993940  350425 addons.go:530] duration metric: took 1m17.461113384s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns cloud-spanner registry-creds storage-provisioner-rancher inspektor-gadget nvidia-device-plugin storage-provisioner default-storageclass metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1124 02:26:14.993984  350425 start.go:247] waiting for cluster config update ...
	I1124 02:26:14.994010  350425 start.go:256] writing updated cluster config ...
	I1124 02:26:14.994291  350425 ssh_runner.go:195] Run: rm -f paused
	I1124 02:26:14.998351  350425 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 02:26:15.000903  350425 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rdmxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.004367  350425 pod_ready.go:94] pod "coredns-66bc5c9577-rdmxf" is "Ready"
	I1124 02:26:15.004386  350425 pod_ready.go:86] duration metric: took 3.460188ms for pod "coredns-66bc5c9577-rdmxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.005993  350425 pod_ready.go:83] waiting for pod "etcd-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.009103  350425 pod_ready.go:94] pod "etcd-addons-831846" is "Ready"
	I1124 02:26:15.009125  350425 pod_ready.go:86] duration metric: took 3.115113ms for pod "etcd-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.010750  350425 pod_ready.go:83] waiting for pod "kube-apiserver-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.014072  350425 pod_ready.go:94] pod "kube-apiserver-addons-831846" is "Ready"
	I1124 02:26:15.014093  350425 pod_ready.go:86] duration metric: took 3.326624ms for pod "kube-apiserver-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.015698  350425 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.401167  350425 pod_ready.go:94] pod "kube-controller-manager-addons-831846" is "Ready"
	I1124 02:26:15.401191  350425 pod_ready.go:86] duration metric: took 385.474121ms for pod "kube-controller-manager-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.601541  350425 pod_ready.go:83] waiting for pod "kube-proxy-78b65" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:16.001548  350425 pod_ready.go:94] pod "kube-proxy-78b65" is "Ready"
	I1124 02:26:16.001584  350425 pod_ready.go:86] duration metric: took 400.020096ms for pod "kube-proxy-78b65" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:16.202402  350425 pod_ready.go:83] waiting for pod "kube-scheduler-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:16.601245  350425 pod_ready.go:94] pod "kube-scheduler-addons-831846" is "Ready"
	I1124 02:26:16.601275  350425 pod_ready.go:86] duration metric: took 398.843581ms for pod "kube-scheduler-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:16.601290  350425 pod_ready.go:40] duration metric: took 1.6029084s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 02:26:16.642752  350425 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 02:26:16.644366  350425 out.go:179] * Done! kubectl is now configured to use "addons-831846" cluster and "default" namespace by default
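
The kapi.go lines above show minikube polling each addon's pods by label selector until they leave Pending, then logging a duration metric once the selector is satisfied. Below is a minimal sketch of that polling pattern with client-go; the namespace, interval, timeout, and function names are illustrative assumptions, not minikube's actual kapi.go implementation (which also inspects pod conditions, not just the phase).

    // Sketch only: poll pods matching a label selector until all are Running.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func waitForSelector(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
    	start := time.Now()
    	err := wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
    			if err != nil || len(pods.Items) == 0 {
    				return false, nil // transient errors and empty lists are simply retried
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
    					return false, nil
    				}
    			}
    			return true, nil
    		})
    	if err == nil {
    		fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
    	}
    	return err
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	if err := waitForSelector(cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute); err != nil {
    		panic(err)
    	}
    }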
	
	
	==> CRI-O <==
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.799460962Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-m97s5/POD" id=189d6fec-9e9c-4fa1-8a28-34513917945c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.79953337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.806231768Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-m97s5 Namespace:default ID:8a8f0e761b692faffd51c51d9299b126d9f8257abdbc91d1c125003dbcd64cea UID:e3d0e416-4d4f-4429-a844-ba56182ca85f NetNS:/var/run/netns/efcd6592-6875-41c2-b938-ef40511bed17 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008bb10}] Aliases:map[]}"
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.806279293Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-m97s5 to CNI network \"kindnet\" (type=ptp)"
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.825163822Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-m97s5 Namespace:default ID:8a8f0e761b692faffd51c51d9299b126d9f8257abdbc91d1c125003dbcd64cea UID:e3d0e416-4d4f-4429-a844-ba56182ca85f NetNS:/var/run/netns/efcd6592-6875-41c2-b938-ef40511bed17 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008bb10}] Aliases:map[]}"
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.825288059Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-m97s5 for CNI network kindnet (type=ptp)"
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.826039621Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.826814383Z" level=info msg="Ran pod sandbox 8a8f0e761b692faffd51c51d9299b126d9f8257abdbc91d1c125003dbcd64cea with infra container: default/hello-world-app-5d498dc89-m97s5/POD" id=189d6fec-9e9c-4fa1-8a28-34513917945c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.828013484Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=4b7b53eb-d0cd-483d-a06d-c211328e4889 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.82813475Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=4b7b53eb-d0cd-483d-a06d-c211328e4889 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.828183582Z" level=info msg="Neither image nor artifact docker.io/kicbase/echo-server:1.0 found" id=4b7b53eb-d0cd-483d-a06d-c211328e4889 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.828803867Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=0dbe2c20-3326-4920-a215-a4a5d91646d4 name=/runtime.v1.ImageService/PullImage
	Nov 24 02:29:03 addons-831846 crio[773]: time="2025-11-24T02:29:03.832740305Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 24 02:29:04 addons-831846 crio[773]: time="2025-11-24T02:29:04.635456928Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=0dbe2c20-3326-4920-a215-a4a5d91646d4 name=/runtime.v1.ImageService/PullImage
	Nov 24 02:29:04 addons-831846 crio[773]: time="2025-11-24T02:29:04.636008388Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=29008436-9ff6-48e9-acfd-a7deb2888262 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 02:29:04 addons-831846 crio[773]: time="2025-11-24T02:29:04.637287953Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=9fc36362-e07e-42e3-99ff-0bf136c10a20 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 02:29:04 addons-831846 crio[773]: time="2025-11-24T02:29:04.641062986Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-m97s5/hello-world-app" id=06468c1d-4dce-4a4f-b7e8-48d800687e04 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 02:29:04 addons-831846 crio[773]: time="2025-11-24T02:29:04.641205978Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 02:29:04 addons-831846 crio[773]: time="2025-11-24T02:29:04.647009393Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 02:29:04 addons-831846 crio[773]: time="2025-11-24T02:29:04.647161852Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3c72109e5b3e9738019f6653889b1ca307723bd539d292607fc8895cb842cbf1/merged/etc/passwd: no such file or directory"
	Nov 24 02:29:04 addons-831846 crio[773]: time="2025-11-24T02:29:04.647185073Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3c72109e5b3e9738019f6653889b1ca307723bd539d292607fc8895cb842cbf1/merged/etc/group: no such file or directory"
	Nov 24 02:29:04 addons-831846 crio[773]: time="2025-11-24T02:29:04.64738578Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 02:29:04 addons-831846 crio[773]: time="2025-11-24T02:29:04.677672959Z" level=info msg="Created container 91e40f5166526793e5d6a9df9ed6fca6536c60d1e4dcecb9c3c8e69168d37ee5: default/hello-world-app-5d498dc89-m97s5/hello-world-app" id=06468c1d-4dce-4a4f-b7e8-48d800687e04 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 02:29:04 addons-831846 crio[773]: time="2025-11-24T02:29:04.678295344Z" level=info msg="Starting container: 91e40f5166526793e5d6a9df9ed6fca6536c60d1e4dcecb9c3c8e69168d37ee5" id=83b34e97-1a48-4c36-8a0e-d60222290067 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 02:29:04 addons-831846 crio[773]: time="2025-11-24T02:29:04.68017642Z" level=info msg="Started container" PID=9985 containerID=91e40f5166526793e5d6a9df9ed6fca6536c60d1e4dcecb9c3c8e69168d37ee5 description=default/hello-world-app-5d498dc89-m97s5/hello-world-app id=83b34e97-1a48-4c36-8a0e-d60222290067 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8a8f0e761b692faffd51c51d9299b126d9f8257abdbc91d1c125003dbcd64cea
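
The CRI-O entries above trace one pod start end-to-end: RunPodSandbox, an ImageStatus miss ("Image ... not found"), PullImage, CreateContainer, StartContainer. The following is a hedged sketch of that same call sequence issued directly against a CRI socket using the k8s.io/cri-api definitions; the socket path, metadata, and error handling are illustrative, and a real kubelet supplies far richer sandbox and container configs.

    // Sketch only: the CRI call order visible in the CRI-O log above.
    package main

    import (
    	"context"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	rt := cri.NewRuntimeServiceClient(conn)
    	img := cri.NewImageServiceClient(conn)
    	ctx := context.Background()

    	sandboxCfg := &cri.PodSandboxConfig{
    		Metadata: &cri.PodSandboxMetadata{Name: "hello-world-app", Namespace: "default", Uid: "demo-uid"},
    	}
    	sb, err := rt.RunPodSandbox(ctx, &cri.RunPodSandboxRequest{Config: sandboxCfg})
    	if err != nil {
    		panic(err)
    	}

    	image := &cri.ImageSpec{Image: "docker.io/kicbase/echo-server:1.0"}
    	if st, _ := img.ImageStatus(ctx, &cri.ImageStatusRequest{Image: image}); st.GetImage() == nil {
    		// A nil status is the "Image ... not found" case in the log: pull it.
    		if _, err := img.PullImage(ctx, &cri.PullImageRequest{Image: image}); err != nil {
    			panic(err)
    		}
    	}

    	c, err := rt.CreateContainer(ctx, &cri.CreateContainerRequest{
    		PodSandboxId: sb.PodSandboxId,
    		Config: &cri.ContainerConfig{
    			Metadata: &cri.ContainerMetadata{Name: "hello-world-app"},
    			Image:    image,
    		},
    		SandboxConfig: sandboxCfg,
    	})
    	if err != nil {
    		panic(err)
    	}
    	if _, err := rt.StartContainer(ctx, &cri.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
    		panic(err)
    	}
    }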
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	91e40f5166526       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   8a8f0e761b692       hello-world-app-5d498dc89-m97s5            default
	f299b32d7aec3       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   bd7b856f8557a       registry-creds-764b6fb674-h45vm            kube-system
	af29ddade9c4b       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   bf94980a00afc       nginx                                      default
	74ffea65b1dc0       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   1597d1f02aae2       busybox                                    default
	6a692d55674d0       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   3211430f96d27       csi-hostpathplugin-lmkkf                   kube-system
	80aafa11113da       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   3211430f96d27       csi-hostpathplugin-lmkkf                   kube-system
	ab306614f72c4       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   3211430f96d27       csi-hostpathplugin-lmkkf                   kube-system
	e3eafecec2c9c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   650604de0c37c       gcp-auth-78565c9fb4-g9pxh                  gcp-auth
	87464323b0915       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   3211430f96d27       csi-hostpathplugin-lmkkf                   kube-system
	04b1500539ee9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   3ef36a913e4fa       gadget-465gm                               gadget
	5b6dca54cd1ba       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   3211430f96d27       csi-hostpathplugin-lmkkf                   kube-system
	dce04ca4ae70e       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago            Running             controller                               0                   de71ba0fd8f3c       ingress-nginx-controller-6c8bf45fb-645nl   ingress-nginx
	cb685f958daee       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   1877d0221d21f       registry-proxy-qnxkh                       kube-system
	d36d11b30d634       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   580fcf6ee6a7d       amd-gpu-device-plugin-6f6fp                kube-system
	488f8f5aecccf       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   4eecb3937ec7e       snapshot-controller-7d9fbc56b8-5mjkq       kube-system
	3a2f505270ed3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   3211430f96d27       csi-hostpathplugin-lmkkf                   kube-system
	98efe6015f81d       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   f4914870f3b6c       nvidia-device-plugin-daemonset-gf6tr       kube-system
	20ce1e4b1e717       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   d44e54ff5a3e7       local-path-provisioner-648f6765c9-r8p78    local-path-storage
	e716ff90bdcb5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              patch                                    0                   2e45b3ee81df6       ingress-nginx-admission-patch-sd9fv        ingress-nginx
	c290117d4f470       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   9f4712529e2ba       registry-6b586f9694-fmpk9                  kube-system
	87618f57415b4       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   7c919e329d34b       yakd-dashboard-5ff678cb9-2mtd5             yakd-dashboard
	1a1040cba9828       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   af193c1fc2b23       csi-hostpath-attacher-0                    kube-system
	7315919ab4d42       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   396609098de55       csi-hostpath-resizer-0                     kube-system
	f1c6f93620483       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   c1398ac562604       kube-ingress-dns-minikube                  kube-system
	ab2275b7d143a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   dd5a845edb903       snapshot-controller-7d9fbc56b8-hhf7t       kube-system
	cc283ff38672b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              create                                   0                   9577123c0f1ed       ingress-nginx-admission-create-lkj84       ingress-nginx
	70c9e18564a35       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   cd82be8668084       cloud-spanner-emulator-5bdddb765-wf4l7     default
	211b6a7c5a0f7       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   969f4caf1f117       metrics-server-85b7d694d7-jmkg5            kube-system
	6be1f10bddb9d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   f9a6c6fa1e19b       storage-provisioner                        kube-system
	109ca0df89d74       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   fb91c1d3e22ea       coredns-66bc5c9577-rdmxf                   kube-system
	aac60890d17a3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   d5c64b563aaf9       kube-proxy-78b65                           kube-system
	837f7d173b2d6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   b60e8128a7968       kindnet-8rv8j                              kube-system
	3949ef8e07cb2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   e79da0e4cb0e4       kube-scheduler-addons-831846               kube-system
	9c9ff4c6ef4b7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   7e1df73d57368       kube-apiserver-addons-831846               kube-system
	a1f1f10128909       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   0e84c8f3a8b7b       kube-controller-manager-addons-831846      kube-system
	8704fbfea0bb0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   45ab090f2b747       etcd-addons-831846                         kube-system
	
	
	==> coredns [109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314] <==
	[INFO] 10.244.0.22:47080 - 2782 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086745s
	[INFO] 10.244.0.22:58455 - 62682 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004583697s
	[INFO] 10.244.0.22:53011 - 65406 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005329066s
	[INFO] 10.244.0.22:41941 - 57592 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004696961s
	[INFO] 10.244.0.22:58228 - 37793 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006173156s
	[INFO] 10.244.0.22:57028 - 2235 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00552887s
	[INFO] 10.244.0.22:58224 - 62800 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005634499s
	[INFO] 10.244.0.22:39374 - 22570 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000803743s
	[INFO] 10.244.0.22:36319 - 53376 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001301863s
	[INFO] 10.244.0.24:34476 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000231359s
	[INFO] 10.244.0.24:36770 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000205062s
	[INFO] 10.244.0.31:38069 - 2953 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000239147s
	[INFO] 10.244.0.31:52429 - 10014 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000297727s
	[INFO] 10.244.0.31:49277 - 22755 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000115379s
	[INFO] 10.244.0.31:49498 - 414 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000150826s
	[INFO] 10.244.0.31:37116 - 50374 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000095796s
	[INFO] 10.244.0.31:39416 - 52817 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000172663s
	[INFO] 10.244.0.31:44599 - 55972 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.005338759s
	[INFO] 10.244.0.31:46958 - 26556 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.005425912s
	[INFO] 10.244.0.31:54060 - 30358 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005656424s
	[INFO] 10.244.0.31:39768 - 13105 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.006221561s
	[INFO] 10.244.0.31:48005 - 43375 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004347155s
	[INFO] 10.244.0.31:34256 - 13250 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004679937s
	[INFO] 10.244.0.31:53161 - 42777 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001401106s
	[INFO] 10.244.0.31:40836 - 36712 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001845428s
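
The NXDOMAIN chains above are resolv.conf search-list expansion at work: with Kubernetes' default ndots:5, a name like accounts.google.com (two dots) is tried against every search domain, cluster-local suffixes first and then the GCE host's domains, before the bare name finally resolves with NOERROR. A small sketch of that expansion order, assuming the search list implied by the queries:

    // Sketch of glibc-style ndots search expansion, which produces the
    // coredns query order shown above.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func expandQueries(name string, search []string, ndots int) []string {
    	var out []string
    	relative := func() {
    		for _, d := range search {
    			out = append(out, name+"."+d)
    		}
    	}
    	if strings.Count(name, ".") >= ndots {
    		out = append(out, name) // enough dots: try the name absolutely first
    		relative()
    	} else {
    		relative() // too few dots: walk the search list first, bare name last
    		out = append(out, name)
    	}
    	return out
    }

    func main() {
    	search := []string{
    		"kube-system.svc.cluster.local", "svc.cluster.local", "cluster.local",
    		"us-central1-a.c.k8s-minikube.internal", "c.k8s-minikube.internal", "google.internal",
    	}
    	for _, q := range expandQueries("accounts.google.com", search, 5) {
    		fmt.Println(q)
    	}
    }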
	
	
	==> describe nodes <==
	Name:               addons-831846
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-831846
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=addons-831846
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T02_24_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-831846
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-831846"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 02:24:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-831846
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 02:28:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 02:28:55 +0000   Mon, 24 Nov 2025 02:24:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 02:28:55 +0000   Mon, 24 Nov 2025 02:24:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 02:28:55 +0000   Mon, 24 Nov 2025 02:24:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 02:28:55 +0000   Mon, 24 Nov 2025 02:25:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-831846
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                66ee3362-62e8-4675-bb66-01d23f6ba5e0
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  default                     cloud-spanner-emulator-5bdddb765-wf4l7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  default                     hello-world-app-5d498dc89-m97s5             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-465gm                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  gcp-auth                    gcp-auth-78565c9fb4-g9pxh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-645nl    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m5s
	  kube-system                 amd-gpu-device-plugin-6f6fp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 coredns-66bc5c9577-rdmxf                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m7s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 csi-hostpathplugin-lmkkf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 etcd-addons-831846                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m13s
	  kube-system                 kindnet-8rv8j                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m7s
	  kube-system                 kube-apiserver-addons-831846                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-controller-manager-addons-831846       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-proxy-78b65                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-scheduler-addons-831846                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m13s
	  kube-system                 metrics-server-85b7d694d7-jmkg5             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m6s
	  kube-system                 nvidia-device-plugin-daemonset-gf6tr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 registry-6b586f9694-fmpk9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 registry-creds-764b6fb674-h45vm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 registry-proxy-qnxkh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m26s
	  kube-system                 snapshot-controller-7d9fbc56b8-5mjkq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 snapshot-controller-7d9fbc56b8-hhf7t        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  local-path-storage          local-path-provisioner-648f6765c9-r8p78     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2mtd5              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  Starting                 4m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m18s (x8 over 4m18s)  kubelet          Node addons-831846 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m18s (x8 over 4m18s)  kubelet          Node addons-831846 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m18s (x8 over 4m18s)  kubelet          Node addons-831846 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m13s                  kubelet          Node addons-831846 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m13s                  kubelet          Node addons-831846 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m13s                  kubelet          Node addons-831846 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m8s                   node-controller  Node addons-831846 event: Registered Node addons-831846 in Controller
	  Normal  NodeReady                3m26s                  kubelet          Node addons-831846 status is now: NodeReady
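
The percentages in the Allocated resources table are the summed pod requests divided by node allocatable, truncated to an integer; for example, 1050m of requested CPU against 8 allocatable cores is 13%. A quick check of that arithmetic, with values copied from the tables above:

    // Reproduces the request percentages shown in "Allocated resources".
    package main

    import "fmt"

    func main() {
    	cpuReqMilli, cpuAllocMilli := int64(1050), int64(8*1000) // 1050m of 8 cores
    	memReqKi, memAllocKi := int64(638*1024), int64(32863352) // 638Mi of 32863352Ki
    	fmt.Printf("cpu: %dm (%d%%)\n", cpuReqMilli, cpuReqMilli*100/cpuAllocMilli)   // 13%
    	fmt.Printf("memory: %dKi (%d%%)\n", memReqKi, memReqKi*100/memAllocKi)        // 1%
    }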
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 a4 5e 1f c0 90 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 ca fc 5f 92 50 08 06
	[Nov24 02:26] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.010203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023866] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +2.047771] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[Nov24 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +8.191144] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[ +16.382391] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[ +32.252621] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	
	
	==> etcd [8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971] <==
	{"level":"warn","ts":"2025-11-24T02:24:48.734398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.740078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.747967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.754578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.761678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.775599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.781597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.787170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.832605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:59.949947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:59.956148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:25:26.233245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:25:26.240228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:25:26.264837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56442","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T02:26:18.883302Z","caller":"traceutil/trace.go:172","msg":"trace[636580253] transaction","detail":"{read_only:false; response_revision:1255; number_of_response:1; }","duration":"118.447794ms","start":"2025-11-24T02:26:18.764832Z","end":"2025-11-24T02:26:18.883279Z","steps":["trace[636580253] 'process raft request'  (duration: 118.333623ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:26:18.883351Z","caller":"traceutil/trace.go:172","msg":"trace[1052960422] transaction","detail":"{read_only:false; response_revision:1259; number_of_response:1; }","duration":"115.094279ms","start":"2025-11-24T02:26:18.768245Z","end":"2025-11-24T02:26:18.883339Z","steps":["trace[1052960422] 'process raft request'  (duration: 115.059494ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:26:18.883355Z","caller":"traceutil/trace.go:172","msg":"trace[509223276] transaction","detail":"{read_only:false; response_revision:1254; number_of_response:1; }","duration":"118.815185ms","start":"2025-11-24T02:26:18.764525Z","end":"2025-11-24T02:26:18.883340Z","steps":["trace[509223276] 'process raft request'  (duration: 118.549037ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:26:18.883388Z","caller":"traceutil/trace.go:172","msg":"trace[1791445000] transaction","detail":"{read_only:false; response_revision:1257; number_of_response:1; }","duration":"118.518913ms","start":"2025-11-24T02:26:18.764856Z","end":"2025-11-24T02:26:18.883375Z","steps":["trace[1791445000] 'process raft request'  (duration: 118.378576ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:26:18.883544Z","caller":"traceutil/trace.go:172","msg":"trace[1797184880] transaction","detail":"{read_only:false; response_revision:1258; number_of_response:1; }","duration":"117.15402ms","start":"2025-11-24T02:26:18.766382Z","end":"2025-11-24T02:26:18.883536Z","steps":["trace[1797184880] 'process raft request'  (duration: 116.884835ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:26:18.883321Z","caller":"traceutil/trace.go:172","msg":"trace[71626399] transaction","detail":"{read_only:false; response_revision:1256; number_of_response:1; }","duration":"118.465711ms","start":"2025-11-24T02:26:18.764840Z","end":"2025-11-24T02:26:18.883306Z","steps":["trace[71626399] 'process raft request'  (duration: 118.364666ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T02:26:19.068301Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.284043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox\" limit:1 ","response":"range_response_count:1 size:3206"}
	{"level":"info","ts":"2025-11-24T02:26:19.068373Z","caller":"traceutil/trace.go:172","msg":"trace[1740405195] range","detail":"{range_begin:/registry/pods/default/busybox; range_end:; response_count:1; response_revision:1259; }","duration":"177.372947ms","start":"2025-11-24T02:26:18.890985Z","end":"2025-11-24T02:26:19.068357Z","steps":["trace[1740405195] 'agreement among raft nodes before linearized reading'  (duration: 76.131192ms)","trace[1740405195] 'range keys from in-memory index tree'  (duration: 101.121916ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T02:26:19.068908Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.180565ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041523998558254 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1238 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T02:26:19.069038Z","caller":"traceutil/trace.go:172","msg":"trace[687560575] transaction","detail":"{read_only:false; response_revision:1260; number_of_response:1; }","duration":"181.019435ms","start":"2025-11-24T02:26:18.887995Z","end":"2025-11-24T02:26:19.069014Z","steps":["trace[687560575] 'process raft request'  (duration: 79.161955ms)","trace[687560575] 'compare'  (duration: 101.092988ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T02:26:19.069218Z","caller":"traceutil/trace.go:172","msg":"trace[368278866] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"177.811285ms","start":"2025-11-24T02:26:18.891393Z","end":"2025-11-24T02:26:19.069204Z","steps":["trace[368278866] 'process raft request'  (duration: 177.58233ms)"],"step_count":1}
	
	
	==> gcp-auth [e3eafecec2c9ca08ae5afd8a3456082dbd372ee93aa1b42ce7e982c2a894a689] <==
	2025/11/24 02:26:11 GCP Auth Webhook started!
	2025/11/24 02:26:16 Ready to marshal response ...
	2025/11/24 02:26:16 Ready to write response ...
	2025/11/24 02:26:17 Ready to marshal response ...
	2025/11/24 02:26:17 Ready to write response ...
	2025/11/24 02:26:17 Ready to marshal response ...
	2025/11/24 02:26:17 Ready to write response ...
	2025/11/24 02:26:35 Ready to marshal response ...
	2025/11/24 02:26:35 Ready to write response ...
	2025/11/24 02:26:37 Ready to marshal response ...
	2025/11/24 02:26:37 Ready to write response ...
	2025/11/24 02:26:37 Ready to marshal response ...
	2025/11/24 02:26:37 Ready to write response ...
	2025/11/24 02:26:39 Ready to marshal response ...
	2025/11/24 02:26:39 Ready to write response ...
	2025/11/24 02:26:40 Ready to marshal response ...
	2025/11/24 02:26:40 Ready to write response ...
	2025/11/24 02:26:47 Ready to marshal response ...
	2025/11/24 02:26:47 Ready to write response ...
	2025/11/24 02:26:56 Ready to marshal response ...
	2025/11/24 02:26:56 Ready to write response ...
	2025/11/24 02:29:03 Ready to marshal response ...
	2025/11/24 02:29:03 Ready to write response ...
	
	
	==> kernel <==
	 02:29:05 up  1:11,  0 user,  load average: 0.25, 1.09, 1.66
	Linux addons-831846 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd] <==
	I1124 02:26:57.896917       1 main.go:301] handling current node
	I1124 02:27:07.897070       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:27:07.897121       1 main.go:301] handling current node
	I1124 02:27:17.897289       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:27:17.897324       1 main.go:301] handling current node
	I1124 02:27:27.897088       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:27:27.897116       1 main.go:301] handling current node
	I1124 02:27:37.896790       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:27:37.896819       1 main.go:301] handling current node
	I1124 02:27:47.896866       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:27:47.896923       1 main.go:301] handling current node
	I1124 02:27:57.897000       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:27:57.897031       1 main.go:301] handling current node
	I1124 02:28:07.897314       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:28:07.897344       1 main.go:301] handling current node
	I1124 02:28:17.897554       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:28:17.897594       1 main.go:301] handling current node
	I1124 02:28:27.896991       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:28:27.897018       1 main.go:301] handling current node
	I1124 02:28:37.897068       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:28:37.897100       1 main.go:301] handling current node
	I1124 02:28:47.897025       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:28:47.897075       1 main.go:301] handling current node
	I1124 02:28:57.897007       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:28:57.897035       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90] <==
	W1124 02:25:26.264803       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 02:25:38.162816       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.41.49:443: connect: connection refused
	E1124 02:25:38.162865       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.41.49:443: connect: connection refused" logger="UnhandledError"
	W1124 02:25:38.162908       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.41.49:443: connect: connection refused
	E1124 02:25:38.162935       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.41.49:443: connect: connection refused" logger="UnhandledError"
	W1124 02:25:38.188370       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.41.49:443: connect: connection refused
	E1124 02:25:38.188409       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.41.49:443: connect: connection refused" logger="UnhandledError"
	W1124 02:25:38.190644       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.41.49:443: connect: connection refused
	E1124 02:25:38.190679       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.41.49:443: connect: connection refused" logger="UnhandledError"
	W1124 02:25:41.580202       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 02:25:41.580240       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.203.36:443: connect: connection refused" logger="UnhandledError"
	E1124 02:25:41.580286       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 02:25:41.580678       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.203.36:443: connect: connection refused" logger="UnhandledError"
	E1124 02:25:41.586281       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.203.36:443: connect: connection refused" logger="UnhandledError"
	E1124 02:25:41.606985       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.203.36:443: connect: connection refused" logger="UnhandledError"
	I1124 02:25:41.680453       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 02:26:25.262158       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34290: use of closed network connection
	E1124 02:26:25.401547       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34322: use of closed network connection
	I1124 02:26:39.857500       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1124 02:26:40.048451       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.31.186"}
	I1124 02:26:50.032784       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1124 02:29:03.565347       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.98.179.201"}
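The webhook failures at 02:25:38 are logged as "failing open", so the affected admission requests were admitted without gcp-auth mutation while the webhook backend was still unreachable; the gcp-auth log above shows the webhook only reported started at 02:26:11, which brackets the connection-refused window.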
	
	
	==> kube-controller-manager [a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf] <==
	I1124 02:24:56.218875       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 02:24:56.218881       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 02:24:56.218930       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 02:24:56.218937       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 02:24:56.219019       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 02:24:56.219033       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 02:24:56.219019       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 02:24:56.219364       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 02:24:56.219449       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 02:24:56.220222       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 02:24:56.220231       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 02:24:56.220250       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 02:24:56.222493       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 02:24:56.222605       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 02:24:56.223817       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 02:24:56.229046       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 02:24:56.241466       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 02:25:26.227699       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 02:25:26.227863       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1124 02:25:26.227953       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 02:25:26.250172       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1124 02:25:26.253295       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 02:25:26.328301       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 02:25:26.353412       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 02:25:41.225416       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf] <==
	I1124 02:24:57.488624       1 server_linux.go:53] "Using iptables proxy"
	I1124 02:24:57.574905       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 02:24:57.677866       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:24:57.677929       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:24:57.678911       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:24:57.790408       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:24:57.790550       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:24:57.809782       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:24:57.812072       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:24:57.812099       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:24:57.814296       1 config.go:200] "Starting service config controller"
	I1124 02:24:57.814390       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:24:57.814442       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:24:57.814468       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:24:57.814504       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:24:57.814528       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:24:57.815545       1 config.go:309] "Starting node config controller"
	I1124 02:24:57.816811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:24:57.816871       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 02:24:57.916080       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 02:24:57.916127       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:24:57.916163       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76] <==
	E1124 02:24:49.225926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:24:49.226043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:24:49.226579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:24:49.226615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:24:49.226621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:24:49.226688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:24:49.226742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:24:49.226740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 02:24:49.226793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:24:49.226822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 02:24:49.226845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:24:49.226917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:24:49.226943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:24:49.226930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:24:49.227019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:24:49.227033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:24:50.084796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:24:50.086544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:24:50.100476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:24:50.116355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 02:24:50.156384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:24:50.248073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:24:50.433678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:24:50.491152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 02:24:53.523908       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
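The burst of "Failed to watch ... is forbidden" errors at 02:24:49-50 is the usual control-plane bootstrap window: the scheduler's informers start before its RBAC bindings are being served, and the errors stop once the caches sync at 02:24:53 (last line above).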
	
	
	==> kubelet <==
	Nov 24 02:27:04 addons-831846 kubelet[1292]: I1124 02:27:04.728435    1292 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0bd531c2-c8dd-11f0-9c04-72a795e98d3c\") pod \"1ad45499-3b79-487f-bd9a-bdea8daf9625\" (UID: \"1ad45499-3b79-487f-bd9a-bdea8daf9625\") "
	Nov 24 02:27:04 addons-831846 kubelet[1292]: I1124 02:27:04.728495    1292 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1ad45499-3b79-487f-bd9a-bdea8daf9625-gcp-creds\") pod \"1ad45499-3b79-487f-bd9a-bdea8daf9625\" (UID: \"1ad45499-3b79-487f-bd9a-bdea8daf9625\") "
	Nov 24 02:27:04 addons-831846 kubelet[1292]: I1124 02:27:04.728590    1292 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4mc24\" (UniqueName: \"kubernetes.io/projected/1ad45499-3b79-487f-bd9a-bdea8daf9625-kube-api-access-4mc24\") on node \"addons-831846\" DevicePath \"\""
	Nov 24 02:27:04 addons-831846 kubelet[1292]: I1124 02:27:04.728616    1292 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ad45499-3b79-487f-bd9a-bdea8daf9625-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "1ad45499-3b79-487f-bd9a-bdea8daf9625" (UID: "1ad45499-3b79-487f-bd9a-bdea8daf9625"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 24 02:27:04 addons-831846 kubelet[1292]: I1124 02:27:04.731363    1292 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^0bd531c2-c8dd-11f0-9c04-72a795e98d3c" (OuterVolumeSpecName: "task-pv-storage") pod "1ad45499-3b79-487f-bd9a-bdea8daf9625" (UID: "1ad45499-3b79-487f-bd9a-bdea8daf9625"). InnerVolumeSpecName "pvc-88c4c6a7-eb14-4505-9d95-1780c07d2a92". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 24 02:27:04 addons-831846 kubelet[1292]: I1124 02:27:04.829606    1292 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1ad45499-3b79-487f-bd9a-bdea8daf9625-gcp-creds\") on node \"addons-831846\" DevicePath \"\""
	Nov 24 02:27:04 addons-831846 kubelet[1292]: I1124 02:27:04.829650    1292 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-88c4c6a7-eb14-4505-9d95-1780c07d2a92\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0bd531c2-c8dd-11f0-9c04-72a795e98d3c\") on node \"addons-831846\" "
	Nov 24 02:27:04 addons-831846 kubelet[1292]: I1124 02:27:04.834489    1292 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-88c4c6a7-eb14-4505-9d95-1780c07d2a92" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^0bd531c2-c8dd-11f0-9c04-72a795e98d3c") on node "addons-831846"
	Nov 24 02:27:04 addons-831846 kubelet[1292]: I1124 02:27:04.930746    1292 reconciler_common.go:299] "Volume detached for volume \"pvc-88c4c6a7-eb14-4505-9d95-1780c07d2a92\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^0bd531c2-c8dd-11f0-9c04-72a795e98d3c\") on node \"addons-831846\" DevicePath \"\""
	Nov 24 02:27:04 addons-831846 kubelet[1292]: I1124 02:27:04.942661    1292 scope.go:117] "RemoveContainer" containerID="2a15c67791bb9ec30ec38f1bdab04fadeaf7e7bdfb19ab2d2d1e005e57bb7dc1"
	Nov 24 02:27:04 addons-831846 kubelet[1292]: I1124 02:27:04.951848    1292 scope.go:117] "RemoveContainer" containerID="2a15c67791bb9ec30ec38f1bdab04fadeaf7e7bdfb19ab2d2d1e005e57bb7dc1"
	Nov 24 02:27:04 addons-831846 kubelet[1292]: E1124 02:27:04.952234    1292 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a15c67791bb9ec30ec38f1bdab04fadeaf7e7bdfb19ab2d2d1e005e57bb7dc1\": container with ID starting with 2a15c67791bb9ec30ec38f1bdab04fadeaf7e7bdfb19ab2d2d1e005e57bb7dc1 not found: ID does not exist" containerID="2a15c67791bb9ec30ec38f1bdab04fadeaf7e7bdfb19ab2d2d1e005e57bb7dc1"
	Nov 24 02:27:04 addons-831846 kubelet[1292]: I1124 02:27:04.952280    1292 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a15c67791bb9ec30ec38f1bdab04fadeaf7e7bdfb19ab2d2d1e005e57bb7dc1"} err="failed to get container status \"2a15c67791bb9ec30ec38f1bdab04fadeaf7e7bdfb19ab2d2d1e005e57bb7dc1\": rpc error: code = NotFound desc = could not find container \"2a15c67791bb9ec30ec38f1bdab04fadeaf7e7bdfb19ab2d2d1e005e57bb7dc1\": container with ID starting with 2a15c67791bb9ec30ec38f1bdab04fadeaf7e7bdfb19ab2d2d1e005e57bb7dc1 not found: ID does not exist"
	Nov 24 02:27:05 addons-831846 kubelet[1292]: I1124 02:27:05.423123    1292 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ad45499-3b79-487f-bd9a-bdea8daf9625" path="/var/lib/kubelet/pods/1ad45499-3b79-487f-bd9a-bdea8daf9625/volumes"
	Nov 24 02:27:13 addons-831846 kubelet[1292]: I1124 02:27:13.417192    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qnxkh" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:27:16 addons-831846 kubelet[1292]: I1124 02:27:16.416865    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6f6fp" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:27:25 addons-831846 kubelet[1292]: I1124 02:27:25.416763    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gf6tr" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:27:41 addons-831846 kubelet[1292]: E1124 02:27:41.175550    1292 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-h45vm" podUID="8eac9d40-11df-4d1b-b4ed-05e91b6db498"
	Nov 24 02:27:51 addons-831846 kubelet[1292]: I1124 02:27:51.474929    1292 scope.go:117] "RemoveContainer" containerID="b5476c08ac8d10c14351c023c5135b17b0dcc3c09cd1cf3029a3afeb2b8c8c53"
	Nov 24 02:27:56 addons-831846 kubelet[1292]: I1124 02:27:56.148592    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-h45vm" podStartSLOduration=176.98218822 podStartE2EDuration="2m58.14857084s" podCreationTimestamp="2025-11-24 02:24:58 +0000 UTC" firstStartedPulling="2025-11-24 02:27:54.44668905 +0000 UTC m=+183.117401188" lastFinishedPulling="2025-11-24 02:27:55.613071666 +0000 UTC m=+184.283783808" observedRunningTime="2025-11-24 02:27:56.147979886 +0000 UTC m=+184.818692063" watchObservedRunningTime="2025-11-24 02:27:56.14857084 +0000 UTC m=+184.819282999"
	Nov 24 02:28:14 addons-831846 kubelet[1292]: I1124 02:28:14.416545    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qnxkh" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:28:28 addons-831846 kubelet[1292]: I1124 02:28:28.416912    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gf6tr" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:28:31 addons-831846 kubelet[1292]: I1124 02:28:31.417706    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6f6fp" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:29:03 addons-831846 kubelet[1292]: I1124 02:29:03.619112    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh6vz\" (UniqueName: \"kubernetes.io/projected/e3d0e416-4d4f-4429-a844-ba56182ca85f-kube-api-access-bh6vz\") pod \"hello-world-app-5d498dc89-m97s5\" (UID: \"e3d0e416-4d4f-4429-a844-ba56182ca85f\") " pod="default/hello-world-app-5d498dc89-m97s5"
	Nov 24 02:29:03 addons-831846 kubelet[1292]: I1124 02:29:03.619198    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e3d0e416-4d4f-4429-a844-ba56182ca85f-gcp-creds\") pod \"hello-world-app-5d498dc89-m97s5\" (UID: \"e3d0e416-4d4f-4429-a844-ba56182ca85f\") " pod="default/hello-world-app-5d498dc89-m97s5"
	
	
	==> storage-provisioner [6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03] <==
	W1124 02:28:39.569033       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:41.571443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:41.575999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:43.579061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:43.582701       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:45.585705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:45.588958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:47.591362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:47.595734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:49.598404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:49.601755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:51.604256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:51.609149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:53.611552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:53.619150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:55.622248       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:55.625723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:57.627932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:57.631164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:59.633856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:28:59.637289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:29:01.640168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:29:01.643821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:29:03.647406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:29:03.650981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
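The warnings arrive in pairs roughly every two seconds, which is consistent with the provisioner's Endpoints-based leader election: each renewal reads and then updates the kube-system/k8s.io-minikube-hostpath Endpoints object (the same key visible in the etcd transaction trace near the top of this dump), and both calls trip the v1 Endpoints deprecation warning.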
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-831846 -n addons-831846
helpers_test.go:269: (dbg) Run:  kubectl --context addons-831846 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-lkj84 ingress-nginx-admission-patch-sd9fv
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-831846 describe pod ingress-nginx-admission-create-lkj84 ingress-nginx-admission-patch-sd9fv
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-831846 describe pod ingress-nginx-admission-create-lkj84 ingress-nginx-admission-patch-sd9fv: exit status 1 (53.458731ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lkj84" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sd9fv" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-831846 describe pod ingress-nginx-admission-create-lkj84 ingress-nginx-admission-patch-sd9fv: exit status 1
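The NotFound results are a namespace mismatch rather than vanished pods: the pod listing at helpers_test.go:269 ran with -A, but the describe at helpers_test.go:285 passed no namespace, so kubectl searched default while both admission pods live in ingress-nginx. An illustrative invocation that would resolve them, with the pod names copied from the output above:

	kubectl --context addons-831846 -n ingress-nginx describe pod ingress-nginx-admission-create-lkj84 ingress-nginx-admission-patch-sd9fv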
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (242.349333ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:29:05.866594  365043 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:29:05.866822  365043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:29:05.866833  365043 out.go:374] Setting ErrFile to fd 2...
	I1124 02:29:05.866837  365043 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:29:05.867038  365043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:29:05.867295  365043 mustload.go:66] Loading cluster: addons-831846
	I1124 02:29:05.867578  365043 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:29:05.867594  365043 addons.go:622] checking whether the cluster is paused
	I1124 02:29:05.867668  365043 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:29:05.867680  365043 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:29:05.868039  365043 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:29:05.885225  365043 ssh_runner.go:195] Run: systemctl --version
	I1124 02:29:05.885281  365043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:29:05.901945  365043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:29:05.998225  365043 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:29:05.998297  365043 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:29:06.027751  365043 cri.go:89] found id: "f299b32d7aec3f1571d6e919ad1fa242ce5ee8cda5ad14db233864246fe657b7"
	I1124 02:29:06.027780  365043 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:29:06.027786  365043 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:29:06.027790  365043 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:29:06.027797  365043 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:29:06.027802  365043 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:29:06.027805  365043 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:29:06.027810  365043 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:29:06.027814  365043 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:29:06.027827  365043 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:29:06.027835  365043 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:29:06.027840  365043 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:29:06.027848  365043 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:29:06.027852  365043 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:29:06.027855  365043 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:29:06.027866  365043 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:29:06.027874  365043 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:29:06.027939  365043 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:29:06.027951  365043 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:29:06.027955  365043 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:29:06.027960  365043 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:29:06.027964  365043 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:29:06.027971  365043 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:29:06.027974  365043 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:29:06.027977  365043 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:29:06.027980  365043 cri.go:89] found id: ""
	I1124 02:29:06.028031  365043 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:29:06.042203  365043 out.go:203] 
	W1124 02:29:06.043305  365043 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:29:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:29:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:29:06.043321  365043 out.go:285] * 
	* 
	W1124 02:29:06.047207  365043 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:29:06.048467  365043 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable ingress --alsologtostderr -v=1: exit status 11 (237.756103ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:29:06.108927  365104 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:29:06.109179  365104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:29:06.109188  365104 out.go:374] Setting ErrFile to fd 2...
	I1124 02:29:06.109192  365104 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:29:06.109413  365104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:29:06.109755  365104 mustload.go:66] Loading cluster: addons-831846
	I1124 02:29:06.110100  365104 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:29:06.110123  365104 addons.go:622] checking whether the cluster is paused
	I1124 02:29:06.110231  365104 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:29:06.110249  365104 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:29:06.110659  365104 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:29:06.127581  365104 ssh_runner.go:195] Run: systemctl --version
	I1124 02:29:06.127637  365104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:29:06.143775  365104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:29:06.239819  365104 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:29:06.239882  365104 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:29:06.266866  365104 cri.go:89] found id: "f299b32d7aec3f1571d6e919ad1fa242ce5ee8cda5ad14db233864246fe657b7"
	I1124 02:29:06.266905  365104 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:29:06.266912  365104 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:29:06.266918  365104 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:29:06.266922  365104 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:29:06.266928  365104 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:29:06.266932  365104 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:29:06.266937  365104 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:29:06.266942  365104 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:29:06.266951  365104 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:29:06.266959  365104 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:29:06.266970  365104 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:29:06.266978  365104 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:29:06.266991  365104 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:29:06.266999  365104 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:29:06.267011  365104 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:29:06.267016  365104 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:29:06.267021  365104 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:29:06.267024  365104 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:29:06.267029  365104 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:29:06.267034  365104 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:29:06.267040  365104 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:29:06.267048  365104 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:29:06.267053  365104 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:29:06.267058  365104 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:29:06.267066  365104 cri.go:89] found id: ""
	I1124 02:29:06.267107  365104 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:29:06.280456  365104 out.go:203] 
	W1124 02:29:06.281612  365104 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:29:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:29:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:29:06.281638  365104 out.go:285] * 
	* 
	W1124 02:29:06.285991  365104 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:29:06.287113  365104 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.68s)
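Both "addons disable" failures in this test (ingress-dns and ingress) exit 11 for the same reason: minikube's paused-state check ("checking whether the cluster is paused" in the traces above) shells into the node and runs runc list, which fails because this crio image keeps no container state under /run/runc. A minimal manual reproduction of the probe, assuming the profile is still running; the profile name and the crictl flags are taken verbatim from the logs:

	minikube -p addons-831846 ssh -- sudo runc list -f json
	# fails with: open /run/runc: no such file or directory
	minikube -p addons-831846 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the CRI-level listing that the same check completed successfully just before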

TestAddons/parallel/InspektorGadget (5.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-465gm" [ac41c542-1302-4794-9dd2-f4e8a3aeb0f0] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003467053s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (247.390541ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:26:48.580657  361978 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:26:48.580937  361978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:48.580946  361978 out.go:374] Setting ErrFile to fd 2...
	I1124 02:26:48.580950  361978 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:48.581118  361978 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:26:48.581374  361978 mustload.go:66] Loading cluster: addons-831846
	I1124 02:26:48.581722  361978 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:48.581738  361978 addons.go:622] checking whether the cluster is paused
	I1124 02:26:48.581847  361978 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:48.581862  361978 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:26:48.582218  361978 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:26:48.600249  361978 ssh_runner.go:195] Run: systemctl --version
	I1124 02:26:48.600308  361978 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:26:48.617532  361978 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:26:48.714571  361978 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:26:48.714653  361978 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:26:48.743645  361978 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:26:48.743680  361978 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:26:48.743685  361978 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:26:48.743691  361978 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:26:48.743696  361978 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:26:48.743701  361978 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:26:48.743706  361978 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:26:48.743710  361978 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:26:48.743715  361978 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:26:48.743724  361978 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:26:48.743733  361978 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:26:48.743738  361978 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:26:48.743745  361978 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:26:48.743750  361978 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:26:48.743757  361978 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:26:48.743772  361978 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:26:48.743782  361978 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:26:48.743788  361978 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:26:48.743793  361978 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:26:48.743797  361978 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:26:48.743816  361978 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:26:48.743821  361978 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:26:48.743828  361978 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:26:48.743833  361978 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:26:48.743840  361978 cri.go:89] found id: ""
	I1124 02:26:48.743903  361978 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:26:48.758171  361978 out.go:203] 
	W1124 02:26:48.759204  361978 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:26:48.759225  361978 out.go:285] * 
	* 
	W1124 02:26:48.763602  361978 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:26:48.764870  361978 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)
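
Note: every failed addons enable/disable in this report shares the signature above: the pause check lists the kube-system containers via crictl without issue, then exits because "sudo runc list -f json" cannot open /run/runc on this crio node. A minimal manual reproduction of the mismatch (a sketch, assuming the addons-831846 profile from this run is still up):

	$ minikube -p addons-831846 ssh -- sudo crictl ps --quiet | head -n 3    # succeeds: crio answers over the CRI socket
	$ minikube -p addons-831846 ssh -- sudo runc list -f json                # fails: open /run/runc: no such file or directory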

TestAddons/parallel/MetricsServer (5.31s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.364398ms
I1124 02:26:25.652128  349078 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 02:26:25.652155  349078 kapi.go:107] duration metric: took 3.783668ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-jmkg5" [a7773bc3-3047-45ae-ac07-238fe7a6282f] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.00265981s
addons_test.go:463: (dbg) Run:  kubectl --context addons-831846 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (243.581306ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:26:30.764855  359682 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:26:30.764981  359682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:30.764990  359682 out.go:374] Setting ErrFile to fd 2...
	I1124 02:26:30.764995  359682 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:30.765191  359682 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:26:30.765543  359682 mustload.go:66] Loading cluster: addons-831846
	I1124 02:26:30.765897  359682 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:30.765913  359682 addons.go:622] checking whether the cluster is paused
	I1124 02:26:30.765995  359682 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:30.766007  359682 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:26:30.766360  359682 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:26:30.785105  359682 ssh_runner.go:195] Run: systemctl --version
	I1124 02:26:30.785185  359682 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:26:30.802374  359682 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:26:30.900270  359682 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:26:30.900335  359682 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:26:30.928482  359682 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:26:30.928501  359682 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:26:30.928505  359682 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:26:30.928508  359682 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:26:30.928511  359682 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:26:30.928514  359682 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:26:30.928517  359682 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:26:30.928522  359682 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:26:30.928527  359682 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:26:30.928539  359682 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:26:30.928549  359682 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:26:30.928554  359682 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:26:30.928562  359682 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:26:30.928566  359682 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:26:30.928572  359682 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:26:30.928586  359682 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:26:30.928593  359682 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:26:30.928598  359682 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:26:30.928601  359682 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:26:30.928604  359682 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:26:30.928610  359682 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:26:30.928614  359682 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:26:30.928619  359682 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:26:30.928628  359682 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:26:30.928633  359682 cri.go:89] found id: ""
	I1124 02:26:30.928676  359682 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:26:30.942340  359682 out.go:203] 
	W1124 02:26:30.943336  359682 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:30Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:26:30.943351  359682 out.go:285] * 
	* 
	W1124 02:26:30.947249  359682 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:26:30.948462  359682 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.31s)
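
Note: "kubectl top pods -n kube-system" returned successfully before the disable call, so the Metrics API itself was being served; only the addon-disable pause check failed. The same API can be probed directly with kubectl's raw mode (a sketch against this run's context):

	$ kubectl --context addons-831846 get --raw /apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods | head -c 300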

TestAddons/parallel/CSI (40.11s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1124 02:26:25.648397  349078 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.811785ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-831846 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-831846 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [80a2a4ef-b7ff-4cd2-b574-c1ab96768281] Pending
helpers_test.go:352: "task-pv-pod" [80a2a4ef-b7ff-4cd2-b574-c1ab96768281] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [80a2a4ef-b7ff-4cd2-b574-c1ab96768281] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.00236631s
addons_test.go:572: (dbg) Run:  kubectl --context addons-831846 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-831846 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-831846 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-831846 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-831846 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-831846 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-831846 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [1ad45499-3b79-487f-bd9a-bdea8daf9625] Pending
helpers_test.go:352: "task-pv-pod-restore" [1ad45499-3b79-487f-bd9a-bdea8daf9625] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [1ad45499-3b79-487f-bd9a-bdea8daf9625] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.002776385s
addons_test.go:614: (dbg) Run:  kubectl --context addons-831846 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-831846 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-831846 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (241.188128ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:27:05.330654  362646 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:27:05.330753  362646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:27:05.330765  362646 out.go:374] Setting ErrFile to fd 2...
	I1124 02:27:05.330772  362646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:27:05.330989  362646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:27:05.331261  362646 mustload.go:66] Loading cluster: addons-831846
	I1124 02:27:05.331574  362646 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:27:05.331590  362646 addons.go:622] checking whether the cluster is paused
	I1124 02:27:05.331669  362646 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:27:05.331680  362646 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:27:05.332059  362646 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:27:05.349844  362646 ssh_runner.go:195] Run: systemctl --version
	I1124 02:27:05.349902  362646 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:27:05.367404  362646 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:27:05.463878  362646 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:27:05.463976  362646 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:27:05.493072  362646 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:27:05.493093  362646 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:27:05.493097  362646 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:27:05.493101  362646 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:27:05.493104  362646 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:27:05.493108  362646 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:27:05.493111  362646 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:27:05.493113  362646 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:27:05.493116  362646 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:27:05.493122  362646 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:27:05.493124  362646 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:27:05.493128  362646 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:27:05.493131  362646 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:27:05.493134  362646 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:27:05.493137  362646 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:27:05.493142  362646 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:27:05.493145  362646 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:27:05.493149  362646 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:27:05.493152  362646 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:27:05.493155  362646 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:27:05.493164  362646 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:27:05.493169  362646 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:27:05.493172  362646 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:27:05.493175  362646 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:27:05.493181  362646 cri.go:89] found id: ""
	I1124 02:27:05.493214  362646 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:27:05.506307  362646 out.go:203] 
	W1124 02:27:05.507318  362646 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:27:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:27:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:27:05.507347  362646 out.go:285] * 
	* 
	W1124 02:27:05.511281  362646 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:27:05.512408  362646 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (240.044873ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:27:05.571522  362728 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:27:05.571752  362728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:27:05.571760  362728 out.go:374] Setting ErrFile to fd 2...
	I1124 02:27:05.571764  362728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:27:05.571985  362728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:27:05.572276  362728 mustload.go:66] Loading cluster: addons-831846
	I1124 02:27:05.572594  362728 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:27:05.572610  362728 addons.go:622] checking whether the cluster is paused
	I1124 02:27:05.572691  362728 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:27:05.572703  362728 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:27:05.573083  362728 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:27:05.589869  362728 ssh_runner.go:195] Run: systemctl --version
	I1124 02:27:05.589948  362728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:27:05.606079  362728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:27:05.704233  362728 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:27:05.704312  362728 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:27:05.732632  362728 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:27:05.732663  362728 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:27:05.732668  362728 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:27:05.732671  362728 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:27:05.732674  362728 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:27:05.732678  362728 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:27:05.732681  362728 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:27:05.732684  362728 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:27:05.732687  362728 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:27:05.732704  362728 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:27:05.732709  362728 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:27:05.732714  362728 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:27:05.732719  362728 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:27:05.732723  362728 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:27:05.732728  362728 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:27:05.732741  362728 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:27:05.732747  362728 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:27:05.732752  362728 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:27:05.732755  362728 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:27:05.732757  362728 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:27:05.732760  362728 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:27:05.732763  362728 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:27:05.732766  362728 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:27:05.732769  362728 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:27:05.732772  362728 cri.go:89] found id: ""
	I1124 02:27:05.732841  362728 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:27:05.747018  362728 out.go:203] 
	W1124 02:27:05.748030  362728 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:27:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:27:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:27:05.748045  362728 out.go:285] * 
	* 
	W1124 02:27:05.751911  362728 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:27:05.753127  362728 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (40.11s)
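
Note: the CSI data path itself passed end to end here (provision, pod attach, snapshot, restore); the failure is again the trailing addon-disable pause check. The PVC phase polling above can also be expressed as a single call (a sketch; --for=jsonpath requires kubectl v1.23 or newer):

	$ kubectl --context addons-831846 wait pvc/hpvc -n default --for=jsonpath='{.status.phase}'=Bound --timeout=6m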

TestAddons/parallel/Headlamp (2.47s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-831846 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-831846 --alsologtostderr -v=1: exit status 11 (250.693516ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:26:25.706100  358785 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:26:25.706352  358785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:25.706360  358785 out.go:374] Setting ErrFile to fd 2...
	I1124 02:26:25.706365  358785 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:25.706577  358785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:26:25.706837  358785 mustload.go:66] Loading cluster: addons-831846
	I1124 02:26:25.707184  358785 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:25.707207  358785 addons.go:622] checking whether the cluster is paused
	I1124 02:26:25.707346  358785 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:25.707365  358785 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:26:25.707990  358785 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:26:25.726432  358785 ssh_runner.go:195] Run: systemctl --version
	I1124 02:26:25.726483  358785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:26:25.743795  358785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:26:25.843922  358785 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:26:25.844013  358785 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:26:25.872396  358785 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:26:25.872416  358785 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:26:25.872420  358785 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:26:25.872424  358785 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:26:25.872426  358785 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:26:25.872430  358785 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:26:25.872432  358785 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:26:25.872435  358785 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:26:25.872438  358785 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:26:25.872444  358785 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:26:25.872452  358785 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:26:25.872456  358785 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:26:25.872462  358785 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:26:25.872467  358785 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:26:25.872471  358785 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:26:25.872482  358785 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:26:25.872488  358785 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:26:25.872493  358785 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:26:25.872496  358785 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:26:25.872499  358785 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:26:25.872502  358785 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:26:25.872505  358785 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:26:25.872508  358785 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:26:25.872511  358785 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:26:25.872514  358785 cri.go:89] found id: ""
	I1124 02:26:25.872555  358785 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:26:25.885659  358785 out.go:203] 
	W1124 02:26:25.886914  358785 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:26:25.886951  358785 out.go:285] * 
	* 
	W1124 02:26:25.890872  358785 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:26:25.892105  358785 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-831846 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-831846
helpers_test.go:243: (dbg) docker inspect addons-831846:

-- stdout --
	[
	    {
	        "Id": "2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816",
	        "Created": "2025-11-24T02:24:35.441680908Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 351085,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T02:24:35.470586249Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816/hostname",
	        "HostsPath": "/var/lib/docker/containers/2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816/hosts",
	        "LogPath": "/var/lib/docker/containers/2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816/2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816-json.log",
	        "Name": "/addons-831846",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-831846:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-831846",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2bbbb8de094b319ef64e68dbe73cf0d5dec3af4c1f995fbfa37305cc73d2b816",
	                "LowerDir": "/var/lib/docker/overlay2/d83c5f0438e590d391189509de54f1d798e2e18ff41b633bb43cbbad798581f0-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d83c5f0438e590d391189509de54f1d798e2e18ff41b633bb43cbbad798581f0/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d83c5f0438e590d391189509de54f1d798e2e18ff41b633bb43cbbad798581f0/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d83c5f0438e590d391189509de54f1d798e2e18ff41b633bb43cbbad798581f0/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-831846",
	                "Source": "/var/lib/docker/volumes/addons-831846/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-831846",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-831846",
	                "name.minikube.sigs.k8s.io": "addons-831846",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0d53dbb891f5d477aab1e91e68d700cb2edce62f5f6860fb4e3e5b9d6f0dae7e",
	            "SandboxKey": "/var/run/docker/netns/0d53dbb891f5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-831846": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "dc74d17b046a1b9232c65675579012ae6622be9ecbf5d337b28a0d3bb7d576bf",
	                    "EndpointID": "be205d1bef46584e2dcae24a84f245d522f5104e6b54999e1d0f023b2b15ffcc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "42:bb:9c:df:a5:fd",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-831846",
	                        "2bbbb8de094b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
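
Note on the mapping above: every container port is published on a distinct 127.0.0.1 ephemeral port (22/tcp on 33138, 8443/tcp on 33141, and so on). A quick way to recover a single mapping from the same inspect data is the Go-template query that the start log below also uses; this is a minimal sketch, with the container name taken from this run:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-831846
	# expected output for this run, per the JSON above: 33138
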
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-831846 -n addons-831846
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-831846 logs -n 25: (1.064340464s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	│ COMMAND │ ARGS │ PROFILE │ USER │ VERSION │ START TIME │ END TIME │
	│ start │ -o=json --download-only -p download-only-539155 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker --container-runtime=crio │ download-only-539155 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ delete │ -p download-only-539155 │ download-only-539155 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ start │ -o=json --download-only -p download-only-550393 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker --container-runtime=crio │ download-only-550393 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ │
	│ delete │ --all │ minikube │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ delete │ -p download-only-550393 │ download-only-550393 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ delete │ -p download-only-539155 │ download-only-539155 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ delete │ -p download-only-550393 │ download-only-550393 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ start │ --download-only -p download-docker-371720 --alsologtostderr --driver=docker --container-runtime=crio │ download-docker-371720 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ │
	│ delete │ -p download-docker-371720 │ download-docker-371720 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ start │ --download-only -p binary-mirror-907926 --alsologtostderr --binary-mirror http://127.0.0.1:38615 --driver=docker --container-runtime=crio │ binary-mirror-907926 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ │
	│ delete │ -p binary-mirror-907926 │ binary-mirror-907926 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ addons │ enable dashboard -p addons-831846 │ addons-831846 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ │
	│ addons │ disable dashboard -p addons-831846 │ addons-831846 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ │
	│ start │ -p addons-831846 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-831846 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:26 UTC │
	│ addons │ addons-831846 addons disable volcano --alsologtostderr -v=1 │ addons-831846 │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │ │
	│ addons │ addons-831846 addons disable gcp-auth --alsologtostderr -v=1 │ addons-831846 │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │ │
	│ addons │ enable headlamp -p addons-831846 --alsologtostderr -v=1 │ addons-831846 │ jenkins │ v1.37.0 │ 24 Nov 25 02:26 UTC │ │
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:24:12
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:24:12.497235  350425 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:24:12.497486  350425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:24:12.497494  350425 out.go:374] Setting ErrFile to fd 2...
	I1124 02:24:12.497498  350425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:24:12.497728  350425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:24:12.498234  350425 out.go:368] Setting JSON to false
	I1124 02:24:12.499079  350425 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3999,"bootTime":1763947053,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:24:12.499129  350425 start.go:143] virtualization: kvm guest
	I1124 02:24:12.500474  350425 out.go:179] * [addons-831846] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:24:12.501446  350425 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:24:12.501444  350425 notify.go:221] Checking for updates...
	I1124 02:24:12.502502  350425 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:24:12.503904  350425 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 02:24:12.504959  350425 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 02:24:12.505861  350425 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:24:12.506741  350425 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:24:12.507752  350425 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:24:12.530151  350425 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:24:12.530306  350425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:24:12.586274  350425 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 02:24:12.576722653 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:24:12.586379  350425 docker.go:319] overlay module found
	I1124 02:24:12.588276  350425 out.go:179] * Using the docker driver based on user configuration
	I1124 02:24:12.589187  350425 start.go:309] selected driver: docker
	I1124 02:24:12.589199  350425 start.go:927] validating driver "docker" against <nil>
	I1124 02:24:12.589210  350425 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:24:12.589744  350425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:24:12.641105  350425 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 02:24:12.632279885 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:24:12.641281  350425 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 02:24:12.641496  350425 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 02:24:12.642800  350425 out.go:179] * Using Docker driver with root privileges
	I1124 02:24:12.643671  350425 cni.go:84] Creating CNI manager for ""
	I1124 02:24:12.643739  350425 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 02:24:12.643750  350425 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 02:24:12.643809  350425 start.go:353] cluster config:
	{Name:addons-831846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-831846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:24:12.644981  350425 out.go:179] * Starting "addons-831846" primary control-plane node in "addons-831846" cluster
	I1124 02:24:12.645847  350425 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 02:24:12.646844  350425 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 02:24:12.647709  350425 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 02:24:12.647737  350425 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 02:24:12.647745  350425 cache.go:65] Caching tarball of preloaded images
	I1124 02:24:12.647805  350425 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 02:24:12.647826  350425 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 02:24:12.647834  350425 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 02:24:12.648151  350425 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/config.json ...
	I1124 02:24:12.648175  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/config.json: {Name:mk6c046471a659c96204a53c6d5135384c43c9b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:12.663382  350425 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 02:24:12.663508  350425 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory
	I1124 02:24:12.663527  350425 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory, skipping pull
	I1124 02:24:12.663532  350425 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in cache, skipping pull
	I1124 02:24:12.663543  350425 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 as a tarball
	I1124 02:24:12.663553  350425 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 from local cache
	I1124 02:24:24.353432  350425 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 from cached tarball
	I1124 02:24:24.353477  350425 cache.go:243] Successfully downloaded all kic artifacts
	I1124 02:24:24.353536  350425 start.go:360] acquireMachinesLock for addons-831846: {Name:mk78cdbea9ce09db40f77c1e12049c59393ec2d0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 02:24:24.353639  350425 start.go:364] duration metric: took 80.002µs to acquireMachinesLock for "addons-831846"
	I1124 02:24:24.353672  350425 start.go:93] Provisioning new machine with config: &{Name:addons-831846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-831846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 02:24:24.353793  350425 start.go:125] createHost starting for "" (driver="docker")
	I1124 02:24:24.355301  350425 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1124 02:24:24.355609  350425 start.go:159] libmachine.API.Create for "addons-831846" (driver="docker")
	I1124 02:24:24.355651  350425 client.go:173] LocalClient.Create starting
	I1124 02:24:24.355778  350425 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 02:24:24.470843  350425 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 02:24:24.546856  350425 cli_runner.go:164] Run: docker network inspect addons-831846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 02:24:24.563514  350425 cli_runner.go:211] docker network inspect addons-831846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 02:24:24.563586  350425 network_create.go:284] running [docker network inspect addons-831846] to gather additional debugging logs...
	I1124 02:24:24.563608  350425 cli_runner.go:164] Run: docker network inspect addons-831846
	W1124 02:24:24.578139  350425 cli_runner.go:211] docker network inspect addons-831846 returned with exit code 1
	I1124 02:24:24.578160  350425 network_create.go:287] error running [docker network inspect addons-831846]: docker network inspect addons-831846: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-831846 not found
	I1124 02:24:24.578171  350425 network_create.go:289] output of [docker network inspect addons-831846]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-831846 not found
	
	** /stderr **
	I1124 02:24:24.578290  350425 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 02:24:24.593443  350425 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c8d000}
	I1124 02:24:24.593476  350425 network_create.go:124] attempt to create docker network addons-831846 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 02:24:24.593528  350425 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-831846 addons-831846
	I1124 02:24:24.635600  350425 network_create.go:108] docker network addons-831846 192.168.49.0/24 created
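
At this point the bridge network exists with the requested subnet and gateway, and the node's static IP is computed from it. As a sanity check, the same docker network inspect template these logs use elsewhere can be trimmed to just those two fields; this is a sketch, not part of the recorded run:

	docker network inspect addons-831846 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# expected for this run: 192.168.49.0/24 192.168.49.1
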
	I1124 02:24:24.635638  350425 kic.go:121] calculated static IP "192.168.49.2" for the "addons-831846" container
	I1124 02:24:24.635708  350425 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 02:24:24.652151  350425 cli_runner.go:164] Run: docker volume create addons-831846 --label name.minikube.sigs.k8s.io=addons-831846 --label created_by.minikube.sigs.k8s.io=true
	I1124 02:24:24.668288  350425 oci.go:103] Successfully created a docker volume addons-831846
	I1124 02:24:24.668349  350425 cli_runner.go:164] Run: docker run --rm --name addons-831846-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-831846 --entrypoint /usr/bin/test -v addons-831846:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 02:24:31.095422  350425 cli_runner.go:217] Completed: docker run --rm --name addons-831846-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-831846 --entrypoint /usr/bin/test -v addons-831846:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib: (6.427024618s)
	I1124 02:24:31.095464  350425 oci.go:107] Successfully prepared a docker volume addons-831846
	I1124 02:24:31.095529  350425 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 02:24:31.095547  350425 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 02:24:31.095617  350425 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-831846:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 02:24:35.371995  350425 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-831846:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.276327463s)
	I1124 02:24:35.372033  350425 kic.go:203] duration metric: took 4.276481705s to extract preloaded images to volume ...
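
The preload tarball is unpacked into the addons-831846 volume by a throwaway container, so the cri-o image store is already populated when the node container starts. A spot-check of the volume could reuse the image and mount from the command above; using /bin/ls as the entrypoint is an assumption (the kicbase image is Debian-based per the os-release probe later in this log):

	docker run --rm --entrypoint /bin/ls -v addons-831846:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 /var/lib
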
	W1124 02:24:35.372120  350425 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 02:24:35.372164  350425 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 02:24:35.372207  350425 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 02:24:35.426685  350425 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-831846 --name addons-831846 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-831846 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-831846 --network addons-831846 --ip 192.168.49.2 --volume addons-831846:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 02:24:35.694411  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Running}}
	I1124 02:24:35.712736  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:35.729204  350425 cli_runner.go:164] Run: docker exec addons-831846 stat /var/lib/dpkg/alternatives/iptables
	I1124 02:24:35.781706  350425 oci.go:144] the created container "addons-831846" has a running status.
	I1124 02:24:35.781734  350425 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa...
	I1124 02:24:35.849039  350425 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 02:24:35.871269  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:35.887046  350425 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 02:24:35.887065  350425 kic_runner.go:114] Args: [docker exec --privileged addons-831846 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 02:24:35.926090  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:35.946965  350425 machine.go:94] provisionDockerMachine start ...
	I1124 02:24:35.947085  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:35.964671  350425 main.go:143] libmachine: Using SSH client type: native
	I1124 02:24:35.965013  350425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 02:24:35.965029  350425 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 02:24:35.965628  350425 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55204->127.0.0.1:33138: read: connection reset by peer
	I1124 02:24:39.102239  350425 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-831846
	
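The "connection reset by peer" above is transient: sshd inside the just-started container is not yet accepting connections, and the client retries until the hostname probe succeeds a few seconds later. An equivalent wait loop, purely illustrative, using the port, key path, and user recorded in this run:

	until ssh -i /home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa \
	    -p 33138 -o StrictHostKeyChecking=no -o ConnectTimeout=2 docker@127.0.0.1 true 2>/dev/null; do
	    sleep 1
	done
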
	I1124 02:24:39.102299  350425 ubuntu.go:182] provisioning hostname "addons-831846"
	I1124 02:24:39.102380  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:39.119443  350425 main.go:143] libmachine: Using SSH client type: native
	I1124 02:24:39.119641  350425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 02:24:39.119653  350425 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-831846 && echo "addons-831846" | sudo tee /etc/hostname
	I1124 02:24:39.262156  350425 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-831846
	
	I1124 02:24:39.262231  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:39.279171  350425 main.go:143] libmachine: Using SSH client type: native
	I1124 02:24:39.279383  350425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 02:24:39.279400  350425 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-831846' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-831846/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-831846' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 02:24:39.413737  350425 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 02:24:39.413765  350425 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 02:24:39.413783  350425 ubuntu.go:190] setting up certificates
	I1124 02:24:39.413805  350425 provision.go:84] configureAuth start
	I1124 02:24:39.413867  350425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-831846
	I1124 02:24:39.431315  350425 provision.go:143] copyHostCerts
	I1124 02:24:39.431383  350425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 02:24:39.431508  350425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 02:24:39.431597  350425 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 02:24:39.431662  350425 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.addons-831846 san=[127.0.0.1 192.168.49.2 addons-831846 localhost minikube]
	I1124 02:24:39.486632  350425 provision.go:177] copyRemoteCerts
	I1124 02:24:39.486696  350425 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 02:24:39.486744  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:39.503145  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:39.599217  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 02:24:39.617365  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 02:24:39.633463  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1671 bytes)
	I1124 02:24:39.649273  350425 provision.go:87] duration metric: took 235.454381ms to configureAuth
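
configureAuth generated a server certificate whose SANs cover every name this node answers to (127.0.0.1, 192.168.49.2, addons-831846, localhost, minikube, per the san=[...] line above). One way to confirm the SAN list on the generated server.pem, a sketch using stock openssl:

	openssl x509 -in /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem -noout -text | grep -A1 'Subject Alternative Name'
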
	I1124 02:24:39.649292  350425 ubuntu.go:206] setting minikube options for container-runtime
	I1124 02:24:39.649474  350425 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:24:39.649575  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:39.665826  350425 main.go:143] libmachine: Using SSH client type: native
	I1124 02:24:39.666058  350425 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I1124 02:24:39.666075  350425 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 02:24:39.933049  350425 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 02:24:39.933073  350425 machine.go:97] duration metric: took 3.986086694s to provisionDockerMachine
	I1124 02:24:39.933084  350425 client.go:176] duration metric: took 15.577422313s to LocalClient.Create
	I1124 02:24:39.933103  350425 start.go:167] duration metric: took 15.577495237s to libmachine.API.Create "addons-831846"
	I1124 02:24:39.933112  350425 start.go:293] postStartSetup for "addons-831846" (driver="docker")
	I1124 02:24:39.933125  350425 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 02:24:39.933184  350425 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 02:24:39.933221  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:39.949735  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:40.047426  350425 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 02:24:40.050684  350425 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 02:24:40.050708  350425 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 02:24:40.050718  350425 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 02:24:40.050766  350425 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 02:24:40.050788  350425 start.go:296] duration metric: took 117.668906ms for postStartSetup
	I1124 02:24:40.051062  350425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-831846
	I1124 02:24:40.069014  350425 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/config.json ...
	I1124 02:24:40.069320  350425 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:24:40.069369  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:40.085248  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:40.178140  350425 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 02:24:40.182372  350425 start.go:128] duration metric: took 15.82856204s to createHost
	I1124 02:24:40.182395  350425 start.go:83] releasing machines lock for "addons-831846", held for 15.828739912s
	I1124 02:24:40.182447  350425 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-831846
	I1124 02:24:40.198690  350425 ssh_runner.go:195] Run: cat /version.json
	I1124 02:24:40.198739  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:40.198771  350425 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 02:24:40.198853  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:40.216589  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:40.218201  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:40.360665  350425 ssh_runner.go:195] Run: systemctl --version
	I1124 02:24:40.366635  350425 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 02:24:40.398781  350425 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 02:24:40.403067  350425 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 02:24:40.403125  350425 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 02:24:40.427036  350425 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 02:24:40.427059  350425 start.go:496] detecting cgroup driver to use...
	I1124 02:24:40.427094  350425 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 02:24:40.427144  350425 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 02:24:40.441733  350425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 02:24:40.452743  350425 docker.go:218] disabling cri-docker service (if available) ...
	I1124 02:24:40.452795  350425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 02:24:40.467458  350425 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 02:24:40.482930  350425 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 02:24:40.561086  350425 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 02:24:40.644408  350425 docker.go:234] disabling docker service ...
	I1124 02:24:40.644481  350425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 02:24:40.661467  350425 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 02:24:40.672880  350425 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 02:24:40.750724  350425 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 02:24:40.828145  350425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 02:24:40.839099  350425 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 02:24:40.852076  350425 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 02:24:40.852135  350425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.861539  350425 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 02:24:40.861586  350425 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.869472  350425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.877366  350425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.885083  350425 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 02:24:40.892471  350425 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.900317  350425 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.912508  350425 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 02:24:40.920342  350425 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 02:24:40.927074  350425 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 02:24:40.933617  350425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 02:24:41.007380  350425 ssh_runner.go:195] Run: sudo systemctl restart crio
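
After the sed edits above and the crio restart, /etc/crio/crio.conf.d/02-crio.conf should carry the pause image, the systemd cgroup manager, the "pod" conmon cgroup, and the unprivileged-port sysctl. A quick verification inside the node; the expected lines are derived from the sed commands, not re-read from the machine:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
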
	I1124 02:24:41.133750  350425 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 02:24:41.133821  350425 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 02:24:41.137861  350425 start.go:564] Will wait 60s for crictl version
	I1124 02:24:41.137946  350425 ssh_runner.go:195] Run: which crictl
	I1124 02:24:41.141343  350425 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 02:24:41.165003  350425 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 02:24:41.165089  350425 ssh_runner.go:195] Run: crio --version
	I1124 02:24:41.192796  350425 ssh_runner.go:195] Run: crio --version
	I1124 02:24:41.220832  350425 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 02:24:41.221820  350425 cli_runner.go:164] Run: docker network inspect addons-831846 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 02:24:41.238488  350425 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 02:24:41.242320  350425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
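
The bash fragment above rewrites /etc/hosts atomically: it filters out any stale host.minikube.internal line, appends the gateway mapping, and copies the result back into place. Verifying the entry afterwards (gateway IP from the network created earlier):

	grep 'host.minikube.internal' /etc/hosts
	# 192.168.49.1	host.minikube.internal
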
	I1124 02:24:41.251830  350425 kubeadm.go:884] updating cluster {Name:addons-831846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-831846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 02:24:41.251965  350425 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 02:24:41.252025  350425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 02:24:41.281380  350425 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 02:24:41.281406  350425 crio.go:433] Images already preloaded, skipping extraction
	I1124 02:24:41.281452  350425 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 02:24:41.304835  350425 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 02:24:41.304856  350425 cache_images.go:86] Images are preloaded, skipping loading
	I1124 02:24:41.304865  350425 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1124 02:24:41.304973  350425 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-831846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-831846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
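
The empty ExecStart= in the unit above is the standard systemd drop-in reset: it clears the ExecStart inherited from the base kubelet.service before the override defines its own, since systemd rejects two ExecStart entries for a non-oneshot service. As a sketch, the drop-in written below to 10-kubeadm.conf amounts to:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<-'EOF'
	[Unit]
	Wants=crio.service

	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-831846 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart kubelet
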
	I1124 02:24:41.305056  350425 ssh_runner.go:195] Run: crio config
	I1124 02:24:41.347938  350425 cni.go:84] Creating CNI manager for ""
	I1124 02:24:41.347963  350425 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 02:24:41.347984  350425 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 02:24:41.348018  350425 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-831846 NodeName:addons-831846 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 02:24:41.348188  350425 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-831846"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
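
The generated file above stacks four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. One way to sanity-check such a file offline before init (kubeadm >= 1.26):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml
	kubeadm config print init-defaults   # defaults to diff against
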
	I1124 02:24:41.348263  350425 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 02:24:41.355941  350425 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 02:24:41.355999  350425 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 02:24:41.363225  350425 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 02:24:41.375076  350425 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 02:24:41.388663  350425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1124 02:24:41.400010  350425 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 02:24:41.403229  350425 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 02:24:41.412233  350425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 02:24:41.487680  350425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 02:24:41.510732  350425 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846 for IP: 192.168.49.2
	I1124 02:24:41.510758  350425 certs.go:195] generating shared ca certs ...
	I1124 02:24:41.510779  350425 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.510926  350425 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 02:24:41.589454  350425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt ...
	I1124 02:24:41.589483  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt: {Name:mk41cc2f0def56fbfb754b3a8750ee8828de6e2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.589643  350425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key ...
	I1124 02:24:41.589659  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key: {Name:mked50c87c7e2fff49a6fd4196dbd325894e67f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.589762  350425 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 02:24:41.619716  350425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt ...
	I1124 02:24:41.619736  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt: {Name:mka5b8c2d9a65ddc1272b7582ce7c34dbde1e911 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.619856  350425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key ...
	I1124 02:24:41.619871  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key: {Name:mke60c8f9462f69b2c9cb21c9bff7faff5a9d7e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.619986  350425 certs.go:257] generating profile certs ...
	I1124 02:24:41.620060  350425 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.key
	I1124 02:24:41.620078  350425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt with IP's: []
	I1124 02:24:41.724858  350425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt ...
	I1124 02:24:41.724878  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: {Name:mk1db91edf22fe94153383e289f0e273481d0368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.725011  350425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.key ...
	I1124 02:24:41.725026  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.key: {Name:mk2459d5b0249dfebdb293d9657d24b961375413 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.725118  350425 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.key.510819de
	I1124 02:24:41.725141  350425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt.510819de with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 02:24:41.919121  350425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt.510819de ...
	I1124 02:24:41.919145  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt.510819de: {Name:mkc052818f6b009e7c1c266c9c3b79e5cc6d11b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.919399  350425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.key.510819de ...
	I1124 02:24:41.919429  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.key.510819de: {Name:mke211e616cbbf16fcfc66ec0a61e00c5f5953ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:41.919581  350425 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt.510819de -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt
	I1124 02:24:41.919705  350425 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.key.510819de -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.key
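
The apiserver cert above is issued for the service VIPs (10.96.0.1, 10.0.0.1), loopback, and the node IP, and the hash-suffixed files are then copied to their final names. The SAN set can be confirmed with openssl:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt
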
	I1124 02:24:41.919786  350425 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.key
	I1124 02:24:41.919815  350425 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.crt with IP's: []
	I1124 02:24:42.020937  350425 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.crt ...
	I1124 02:24:42.020964  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.crt: {Name:mk7dcbbc8a7679f60577e37ec6a554aa27393353 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:42.021119  350425 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.key ...
	I1124 02:24:42.021138  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.key: {Name:mk71e00b6ab7c8a71cce2cb67ede8d73f20238d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:42.021341  350425 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 02:24:42.021396  350425 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 02:24:42.021435  350425 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 02:24:42.021471  350425 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 02:24:42.022104  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 02:24:42.039730  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 02:24:42.056003  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 02:24:42.072093  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 02:24:42.087990  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 02:24:42.103927  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 02:24:42.119578  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 02:24:42.135144  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 02:24:42.150800  350425 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 02:24:42.168298  350425 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 02:24:42.179552  350425 ssh_runner.go:195] Run: openssl version
	I1124 02:24:42.185163  350425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 02:24:42.194858  350425 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 02:24:42.198218  350425 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 02:24:42.198259  350425 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 02:24:42.231361  350425 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
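
The ls/openssl/ln sequence above reimplements what c_rehash does: the subject-hash of the CA names the /etc/ssl/certs/<hash>.0 symlink that OpenSSL's trust lookup uses, b5213941 being the hash of minikubeCA.pem here. In two lines:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/"$h".0
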
	I1124 02:24:42.238932  350425 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 02:24:42.242193  350425 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 02:24:42.242231  350425 kubeadm.go:401] StartCluster: {Name:addons-831846 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-831846 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:24:42.242301  350425 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:24:42.242338  350425 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:24:42.267756  350425 cri.go:89] found id: ""
	I1124 02:24:42.267806  350425 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 02:24:42.274856  350425 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 02:24:42.281905  350425 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 02:24:42.281973  350425 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 02:24:42.288768  350425 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 02:24:42.288783  350425 kubeadm.go:158] found existing configuration files:
	
	I1124 02:24:42.288811  350425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 02:24:42.295927  350425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 02:24:42.295965  350425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 02:24:42.303010  350425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 02:24:42.310537  350425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 02:24:42.310597  350425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 02:24:42.318174  350425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 02:24:42.325492  350425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 02:24:42.325538  350425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 02:24:42.332744  350425 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 02:24:42.340343  350425 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 02:24:42.340395  350425 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 02:24:42.347608  350425 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 02:24:42.403120  350425 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 02:24:42.457353  350425 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 02:24:52.179997  350425 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 02:24:52.180082  350425 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 02:24:52.180224  350425 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 02:24:52.180294  350425 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 02:24:52.180337  350425 kubeadm.go:319] OS: Linux
	I1124 02:24:52.180403  350425 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 02:24:52.180475  350425 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 02:24:52.180562  350425 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 02:24:52.180635  350425 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 02:24:52.180719  350425 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 02:24:52.180795  350425 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 02:24:52.180871  350425 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 02:24:52.180942  350425 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 02:24:52.181039  350425 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 02:24:52.181196  350425 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 02:24:52.181317  350425 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 02:24:52.181412  350425 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 02:24:52.183320  350425 out.go:252]   - Generating certificates and keys ...
	I1124 02:24:52.183398  350425 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 02:24:52.183489  350425 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 02:24:52.183580  350425 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 02:24:52.183666  350425 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 02:24:52.183748  350425 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 02:24:52.183817  350425 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 02:24:52.183915  350425 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 02:24:52.184052  350425 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-831846 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 02:24:52.184116  350425 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 02:24:52.184254  350425 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-831846 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 02:24:52.184357  350425 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 02:24:52.184444  350425 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 02:24:52.184506  350425 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 02:24:52.184595  350425 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 02:24:52.184686  350425 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 02:24:52.184767  350425 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 02:24:52.184840  350425 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 02:24:52.184953  350425 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 02:24:52.185031  350425 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 02:24:52.185132  350425 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 02:24:52.185240  350425 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 02:24:52.186316  350425 out.go:252]   - Booting up control plane ...
	I1124 02:24:52.186410  350425 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 02:24:52.186491  350425 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 02:24:52.186573  350425 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 02:24:52.186684  350425 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 02:24:52.186818  350425 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 02:24:52.186942  350425 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 02:24:52.187052  350425 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 02:24:52.187113  350425 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 02:24:52.187267  350425 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 02:24:52.187410  350425 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 02:24:52.187504  350425 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001402195s
	I1124 02:24:52.187615  350425 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 02:24:52.187720  350425 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 02:24:52.187830  350425 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 02:24:52.187939  350425 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 02:24:52.188042  350425 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.12413053s
	I1124 02:24:52.188140  350425 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.958611031s
	I1124 02:24:52.188202  350425 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501124038s
	I1124 02:24:52.188297  350425 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 02:24:52.188406  350425 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 02:24:52.188456  350425 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 02:24:52.188640  350425 kubeadm.go:319] [mark-control-plane] Marking the node addons-831846 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 02:24:52.188690  350425 kubeadm.go:319] [bootstrap-token] Using token: ddy8ur.z7eb0digsuktkhl7
	I1124 02:24:52.189977  350425 out.go:252]   - Configuring RBAC rules ...
	I1124 02:24:52.190085  350425 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 02:24:52.190160  350425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 02:24:52.190287  350425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 02:24:52.190401  350425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 02:24:52.190526  350425 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 02:24:52.190635  350425 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 02:24:52.190762  350425 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 02:24:52.190823  350425 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 02:24:52.190908  350425 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 02:24:52.190921  350425 kubeadm.go:319] 
	I1124 02:24:52.191013  350425 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 02:24:52.191021  350425 kubeadm.go:319] 
	I1124 02:24:52.191115  350425 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 02:24:52.191123  350425 kubeadm.go:319] 
	I1124 02:24:52.191178  350425 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 02:24:52.191260  350425 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 02:24:52.191333  350425 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 02:24:52.191341  350425 kubeadm.go:319] 
	I1124 02:24:52.191412  350425 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 02:24:52.191420  350425 kubeadm.go:319] 
	I1124 02:24:52.191488  350425 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 02:24:52.191496  350425 kubeadm.go:319] 
	I1124 02:24:52.191559  350425 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 02:24:52.191622  350425 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 02:24:52.191685  350425 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 02:24:52.191691  350425 kubeadm.go:319] 
	I1124 02:24:52.191774  350425 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 02:24:52.191875  350425 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 02:24:52.191897  350425 kubeadm.go:319] 
	I1124 02:24:52.191999  350425 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ddy8ur.z7eb0digsuktkhl7 \
	I1124 02:24:52.192119  350425 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 02:24:52.192149  350425 kubeadm.go:319] 	--control-plane 
	I1124 02:24:52.192163  350425 kubeadm.go:319] 
	I1124 02:24:52.192267  350425 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 02:24:52.192275  350425 kubeadm.go:319] 
	I1124 02:24:52.192389  350425 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ddy8ur.z7eb0digsuktkhl7 \
	I1124 02:24:52.192539  350425 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
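
The bootstrap token above expires (ttl: 24h0m0s in the InitConfiguration), but --discovery-token-ca-cert-hash pins a stable value: the SHA-256 of the cluster CA's public key. It can be recomputed at any time with the usual kubeadm recipe, using the cert dir from this run:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
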
	I1124 02:24:52.192567  350425 cni.go:84] Creating CNI manager for ""
	I1124 02:24:52.192577  350425 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 02:24:52.194307  350425 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 02:24:52.195231  350425 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 02:24:52.199378  350425 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 02:24:52.199393  350425 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 02:24:52.211671  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 02:24:52.399360  350425 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 02:24:52.399453  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:52.399469  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-831846 minikube.k8s.io/updated_at=2025_11_24T02_24_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=addons-831846 minikube.k8s.io/primary=true
	I1124 02:24:52.408673  350425 ops.go:34] apiserver oom_adj: -16
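
The -16 read back above is on the legacy oom_adj scale (-17..15); it tells the kernel's OOM killer to strongly prefer other victims over kube-apiserver. Modern kernels expose the same knob as oom_score_adj:

	cat /proc/$(pgrep kube-apiserver)/oom_adj         # legacy scale, as checked above
	cat /proc/$(pgrep kube-apiserver)/oom_score_adj   # current scale, -1000..1000
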
	I1124 02:24:52.469572  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:52.970554  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:53.470026  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:53.969909  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:54.469641  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:54.970899  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:55.470462  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:55.969601  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:56.470546  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:56.970238  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:57.469604  350425 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 02:24:57.531712  350425 kubeadm.go:1114] duration metric: took 5.132320273s to wait for elevateKubeSystemPrivileges
	I1124 02:24:57.531757  350425 kubeadm.go:403] duration metric: took 15.289526796s to StartCluster
	I1124 02:24:57.531782  350425 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:57.531943  350425 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 02:24:57.532482  350425 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 02:24:57.532721  350425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 02:24:57.532768  350425 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 02:24:57.532861  350425 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 02:24:57.533022  350425 addons.go:70] Setting yakd=true in profile "addons-831846"
	I1124 02:24:57.533034  350425 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-831846"
	I1124 02:24:57.533046  350425 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:24:57.533059  350425 addons.go:70] Setting registry-creds=true in profile "addons-831846"
	I1124 02:24:57.533066  350425 addons.go:70] Setting cloud-spanner=true in profile "addons-831846"
	I1124 02:24:57.533071  350425 addons.go:239] Setting addon registry-creds=true in "addons-831846"
	I1124 02:24:57.533068  350425 addons.go:70] Setting registry=true in profile "addons-831846"
	I1124 02:24:57.533083  350425 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-831846"
	I1124 02:24:57.533051  350425 addons.go:239] Setting addon yakd=true in "addons-831846"
	I1124 02:24:57.533099  350425 addons.go:70] Setting metrics-server=true in profile "addons-831846"
	I1124 02:24:57.533106  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533117  350425 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-831846"
	I1124 02:24:57.533120  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533121  350425 addons.go:70] Setting ingress-dns=true in profile "addons-831846"
	I1124 02:24:57.533129  350425 addons.go:70] Setting default-storageclass=true in profile "addons-831846"
	I1124 02:24:57.533136  350425 addons.go:70] Setting volcano=true in profile "addons-831846"
	I1124 02:24:57.533144  350425 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-831846"
	I1124 02:24:57.533147  350425 addons.go:239] Setting addon ingress-dns=true in "addons-831846"
	I1124 02:24:57.533151  350425 addons.go:239] Setting addon volcano=true in "addons-831846"
	I1124 02:24:57.533172  350425 addons.go:70] Setting gcp-auth=true in profile "addons-831846"
	I1124 02:24:57.533196  350425 mustload.go:66] Loading cluster: addons-831846
	I1124 02:24:57.533197  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533231  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533221  350425 addons.go:70] Setting storage-provisioner=true in profile "addons-831846"
	I1124 02:24:57.533257  350425 addons.go:239] Setting addon storage-provisioner=true in "addons-831846"
	I1124 02:24:57.533287  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533389  350425 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:24:57.533503  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533624  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533712  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533744  350425 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-831846"
	I1124 02:24:57.533780  350425 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-831846"
	I1124 02:24:57.533799  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533078  350425 addons.go:239] Setting addon cloud-spanner=true in "addons-831846"
	I1124 02:24:57.533951  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.534084  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.534406  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533121  350425 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-831846"
	I1124 02:24:57.534601  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.535057  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533061  350425 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-831846"
	I1124 02:24:57.535286  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.535780  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533025  350425 addons.go:70] Setting ingress=true in profile "addons-831846"
	I1124 02:24:57.535924  350425 addons.go:239] Setting addon ingress=true in "addons-831846"
	I1124 02:24:57.536013  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.536478  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.537619  350425 out.go:179] * Verifying Kubernetes components...
	I1124 02:24:57.533092  350425 addons.go:70] Setting inspektor-gadget=true in profile "addons-831846"
	I1124 02:24:57.537937  350425 addons.go:239] Setting addon inspektor-gadget=true in "addons-831846"
	I1124 02:24:57.537980  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.538436  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533716  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533727  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.533109  350425 addons.go:239] Setting addon metrics-server=true in "addons-831846"
	I1124 02:24:57.539117  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533736  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.541403  350425 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 02:24:57.533090  350425 addons.go:239] Setting addon registry=true in "addons-831846"
	I1124 02:24:57.541832  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533126  350425 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-831846"
	I1124 02:24:57.542098  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.533732  350425 addons.go:70] Setting volumesnapshots=true in profile "addons-831846"
	I1124 02:24:57.542989  350425 addons.go:239] Setting addon volumesnapshots=true in "addons-831846"
	I1124 02:24:57.543290  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.545543  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.547727  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.548471  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.549625  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.572660  350425 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-831846"
	I1124 02:24:57.572763  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.573289  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:24:57.588466  350425 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 02:24:57.589800  350425 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 02:24:57.589820  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 02:24:57.589878  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.590467  350425 addons.go:239] Setting addon default-storageclass=true in "addons-831846"
	I1124 02:24:57.590540  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.592502  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:24:57.595366  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	W1124 02:24:57.611091  350425 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 02:24:57.615421  350425 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 02:24:57.615559  350425 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 02:24:57.615632  350425 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 02:24:57.621396  350425 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 02:24:57.621418  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 02:24:57.621479  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.626512  350425 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 02:24:57.626555  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 02:24:57.626620  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.626709  350425 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 02:24:57.626755  350425 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 02:24:57.626841  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.628754  350425 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 02:24:57.629784  350425 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 02:24:57.630284  350425 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 02:24:57.630302  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 02:24:57.630370  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.630954  350425 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 02:24:57.631105  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 02:24:57.631201  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 02:24:57.631265  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.634706  350425 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 02:24:57.634707  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 02:24:57.635087  350425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 02:24:57.635107  350425 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 02:24:57.635156  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.636017  350425 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 02:24:57.636038  350425 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 02:24:57.636112  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.636647  350425 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 02:24:57.637044  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 02:24:57.637614  350425 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 02:24:57.637635  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 02:24:57.637692  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.643349  350425 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 02:24:57.643402  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 02:24:57.645354  350425 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 02:24:57.646386  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 02:24:57.646547  350425 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 02:24:57.646561  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 02:24:57.646636  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.650757  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 02:24:57.650956  350425 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 02:24:57.652265  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 02:24:57.652304  350425 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 02:24:57.652316  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 02:24:57.652369  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.654565  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 02:24:57.656242  350425 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 02:24:57.657250  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 02:24:57.657360  350425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 02:24:57.657479  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.660079  350425 out.go:179]   - Using image docker.io/busybox:stable
	I1124 02:24:57.661110  350425 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 02:24:57.662129  350425 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 02:24:57.663303  350425 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 02:24:57.663308  350425 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 02:24:57.663580  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 02:24:57.663938  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.668235  350425 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 02:24:57.668254  350425 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 02:24:57.668304  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.671548  350425 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 02:24:57.674405  350425 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 02:24:57.674425  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 02:24:57.674477  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:24:57.675125  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.677844  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.681856  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.693031  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.702034  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.710329  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.710739  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.712185  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.717918  350425 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
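The pipeline above dumps the coredns ConfigMap, splices two directives in with sed, and replaces the ConfigMap in one pass. The stanza it inserts ahead of the `forward . /etc/resolv.conf` line (reconstructed directly from the sed expression) is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

fallthrough hands every name other than host.minikube.internal on to the remaining plugins, so ordinary cluster and upstream resolution is untouched; the injected record is what lets pods reach the host at a stable name.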
	I1124 02:24:57.733860  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.736166  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.742045  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.742552  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.743510  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.745246  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.745785  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:24:57.779079  350425 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 02:24:57.845355  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 02:24:57.853586  350425 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 02:24:57.853608  350425 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 02:24:57.871911  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 02:24:57.874735  350425 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 02:24:57.874761  350425 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 02:24:57.884312  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 02:24:57.887655  350425 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 02:24:57.887677  350425 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 02:24:57.889707  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 02:24:57.889725  350425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 02:24:57.901828  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 02:24:57.902991  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 02:24:57.911196  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 02:24:57.911307  350425 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 02:24:57.911320  350425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 02:24:57.917426  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 02:24:57.918457  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 02:24:57.923204  350425 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 02:24:57.923225  350425 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 02:24:57.923473  350425 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 02:24:57.923491  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 02:24:57.924359  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 02:24:57.929522  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 02:24:57.934981  350425 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 02:24:57.935003  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 02:24:57.939063  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 02:24:57.939083  350425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 02:24:57.954178  350425 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 02:24:57.954202  350425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 02:24:57.975801  350425 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 02:24:57.975827  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 02:24:57.979122  350425 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 02:24:57.979144  350425 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 02:24:57.992600  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 02:24:57.992622  350425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 02:24:58.009061  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 02:24:58.023069  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 02:24:58.031315  350425 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 02:24:58.031387  350425 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 02:24:58.042343  350425 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 02:24:58.042365  350425 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 02:24:58.062315  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 02:24:58.062411  350425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 02:24:58.095786  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 02:24:58.095812  350425 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 02:24:58.102318  350425 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 02:24:58.102403  350425 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 02:24:58.113156  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 02:24:58.117111  350425 node_ready.go:35] waiting up to 6m0s for node "addons-831846" to be "Ready" ...
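node_ready.go polls the node object until its Ready condition reports True, giving up after six minutes. A hedged kubectl equivalent of the same wait (a sketch, not the call minikube actually makes):

    kubectl wait --for=condition=Ready node/addons-831846 --timeout=6m

The "will retry" warnings further down are this loop observing "Ready":"False" while the kubelet and CNI finish coming up.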
	I1124 02:24:58.117389  350425 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1124 02:24:58.130758  350425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 02:24:58.130778  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 02:24:58.163656  350425 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 02:24:58.163687  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 02:24:58.197458  350425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 02:24:58.197490  350425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 02:24:58.249202  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 02:24:58.275562  350425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 02:24:58.275599  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 02:24:58.318459  350425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 02:24:58.318487  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 02:24:58.356736  350425 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 02:24:58.356767  350425 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 02:24:58.401410  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 02:24:58.626429  350425 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-831846" context rescaled to 1 replicas
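A single-node cluster gains nothing from two CoreDNS replicas, so minikube scales the deployment down through the API. The equivalent one-liner (a sketch of the same operation):

    kubectl -n kube-system scale deployment coredns --replicas=1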
	I1124 02:24:59.062241  350425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.132681733s)
	I1124 02:24:59.062289  350425 addons.go:495] Verifying addon ingress=true in "addons-831846"
	I1124 02:24:59.062287  350425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.053188039s)
	I1124 02:24:59.062320  350425 addons.go:495] Verifying addon registry=true in "addons-831846"
	I1124 02:24:59.062388  350425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.039175109s)
	I1124 02:24:59.062470  350425 addons.go:495] Verifying addon metrics-server=true in "addons-831846"
	I1124 02:24:59.064731  350425 out.go:179] * Verifying ingress addon...
	I1124 02:24:59.064757  350425 out.go:179] * Verifying registry addon...
	I1124 02:24:59.064801  350425 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-831846 service yakd-dashboard -n yakd-dashboard
	
	I1124 02:24:59.067295  350425 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 02:24:59.067338  350425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 02:24:59.070308  350425 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 02:24:59.070324  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:24:59.070422  350425 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 02:24:59.070440  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:24:59.483619  350425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.234356382s)
	W1124 02:24:59.483732  350425 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 02:24:59.483770  350425 retry.go:31] will retry after 253.880977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 02:24:59.483771  350425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.082299898s)
	I1124 02:24:59.483819  350425 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-831846"
	I1124 02:24:59.485738  350425 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 02:24:59.487691  350425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 02:24:59.489749  350425 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 02:24:59.489774  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
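The kapi.go:96 lines that fill the remainder of this log are one poll loop per addon: roughly every 500ms it lists the pods behind a label selector and records the aggregate phase until everything reports Running. A hedged spot-check of the same state from the command line:

    kubectl -n kube-system get pods -l kubernetes.io/minikube-addons=csi-hostpath-driver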
	I1124 02:24:59.569974  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:24:59.570177  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:24:59.738517  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
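The first batch apply raced the API server: the VolumeSnapshotClass CRD and a VolumeSnapshotClass object travel in the same kubectl apply, and the object is rejected because the CRD created milliseconds earlier is not yet established. The retry above re-applies with --force so kubectl may delete and re-create anything it cannot update in place, and it succeeds (2.45s, completing at 02:25:02 below) because the CRD has registered by then. A sketch of sequencing that avoids the race altogether (hypothetical ordering, not what the addon manager does):

    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml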
	I1124 02:24:59.990462  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:00.091088  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:00.091323  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:00.120043  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:00.490486  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:00.591473  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:00.591689  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:00.991274  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:01.091486  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:01.091601  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:01.490922  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:01.569811  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:01.570017  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:01.990427  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:02.069941  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:02.070190  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:02.194057  350425 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.455487376s)
	I1124 02:25:02.490394  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:02.569607  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:02.569680  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:02.618911  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:02.990531  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:03.091197  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:03.091412  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:03.490997  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:03.570086  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:03.570249  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:03.990755  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:04.091728  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:04.091850  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:04.490963  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:04.569952  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:04.570091  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:04.619688  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:04.990352  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:05.090553  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:05.090624  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:05.204597  350425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 02:25:05.204678  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:25:05.222156  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:25:05.325356  350425 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 02:25:05.337389  350425 addons.go:239] Setting addon gcp-auth=true in "addons-831846"
	I1124 02:25:05.337445  350425 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:25:05.337822  350425 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:25:05.355243  350425 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 02:25:05.355289  350425 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:25:05.372195  350425 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:25:05.466809  350425 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 02:25:05.467786  350425 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 02:25:05.468917  350425 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 02:25:05.468936  350425 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 02:25:05.480978  350425 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 02:25:05.481003  350425 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 02:25:05.490904  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:05.493589  350425 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 02:25:05.493605  350425 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 02:25:05.505077  350425 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 02:25:05.570568  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:05.570724  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:05.791538  350425 addons.go:495] Verifying addon gcp-auth=true in "addons-831846"
	I1124 02:25:05.792626  350425 out.go:179] * Verifying gcp-auth addon...
	I1124 02:25:05.794186  350425 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 02:25:05.796313  350425 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 02:25:05.796326  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:05.990660  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:06.069721  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:06.069767  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:06.296465  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:06.491011  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:06.570027  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:06.570306  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:06.797443  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:06.990734  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:07.069849  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:07.070121  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:07.119720  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:07.296946  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:07.490082  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:07.570302  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:07.570456  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:07.796312  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:07.990857  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:08.069972  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:08.070172  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:08.297207  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:08.490700  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:08.569848  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:08.570087  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:08.797176  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:08.990926  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:09.070193  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:09.070269  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:09.120032  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:09.297102  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:09.490168  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:09.570100  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:09.570229  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:09.797334  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:09.990562  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:10.069480  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:10.069642  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:10.297585  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:10.490833  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:10.569840  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:10.570162  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:10.796988  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:10.990055  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:11.070294  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:11.070379  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:11.297241  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:11.490444  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:11.569598  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:11.569757  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:11.619516  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:11.796776  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:11.990176  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:12.070223  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:12.070434  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:12.297520  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:12.491133  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:12.570759  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:12.570790  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:12.797223  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:12.990675  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:13.070007  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:13.070190  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:13.297110  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:13.490380  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:13.569462  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:13.569715  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:13.796461  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:13.990675  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:14.069733  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:14.069743  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:14.119333  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:14.296503  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:14.490991  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:14.571002  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:14.571633  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:14.796867  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:14.990189  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:15.070241  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:15.070492  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:15.297221  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:15.490596  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:15.569495  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:15.569763  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:15.796171  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:15.990679  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:16.069772  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:16.069981  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:16.119664  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:16.296957  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:16.490391  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:16.570517  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:16.570673  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:16.796667  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:16.991271  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:17.070512  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:17.070611  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:17.296806  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:17.489871  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:17.569875  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:17.570129  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:17.797165  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:17.990535  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:18.069927  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:18.069980  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:18.119859  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:18.297338  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:18.490751  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:18.569611  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:18.569742  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:18.796288  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:18.990391  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:19.069581  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:19.069773  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:19.297228  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:19.490534  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:19.569783  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:19.570011  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:19.797399  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:19.990506  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:20.069592  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:20.069775  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:20.297005  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:20.490557  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:20.569765  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:20.569828  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:20.619553  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:20.796806  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:20.990009  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:21.070162  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:21.070421  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:21.297039  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:21.490217  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:21.570379  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:21.570511  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:21.796403  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:21.990861  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:22.070067  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:22.070307  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:22.297592  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:22.491216  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:22.570533  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:22.570689  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:22.619687  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	I1124 02:25:22.797458  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:22.990755  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:23.069903  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:23.070081  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:23.297074  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:23.490011  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:23.570014  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:23.570085  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:23.797074  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:23.990139  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:24.070157  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:24.070351  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 02:25:25.119458  350425 node_ready.go:57] node "addons-831846" has "Ready":"False" status (will retry)
	[... the four kapi.go:96 "waiting for pod" lines above repeated every ~500ms through 02:25:38.298, all still Pending: [<nil>]; the node_ready.go:57 warning repeated at 02:25:27.119, 02:25:29.619, 02:25:32.119, 02:25:34.119, and 02:25:36.120 ...]
	I1124 02:25:38.491718  350425 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 02:25:38.491739  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:38.594337  350425 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 02:25:38.594366  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:38.594910  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
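
The "Found N Pods for label selector" and "waiting for pod" lines come from minikube's kapi helper, which repeatedly lists kube-system pods matching a label selector until they leave Pending. A minimal client-go sketch of that kind of polling loop (not minikube's actual code; the kubeconfig location, interval, timeout, and selector choice here are illustrative assumptions):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location (~/.kube/config); adjust as needed.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		sel := "kubernetes.io/minikube-addons=registry" // one of the selectors polled above
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 10*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						fmt.Printf("waiting for pod %q, current state: %s\n", sel, p.Status.Phase)
						return false, nil
					}
				}
				return true, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("all pods running")
	}
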
	I1124 02:25:38.621508  350425 node_ready.go:49] node "addons-831846" is "Ready"
	I1124 02:25:38.621543  350425 node_ready.go:38] duration metric: took 40.504389312s for node "addons-831846" to be "Ready" ...
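
The node wait that just completed (40.5s) checks the node's Ready condition on each attempt. Expressed with the same client-go types, one attempt looks roughly like this (a sketch, not the node_ready.go source; the clientset and imports from the previous example are assumed):

	// Returns true once the Ready condition reports status True.
	func nodeIsReady(ctx context.Context, cs *kubernetes.Clientset, nodeName string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
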
	I1124 02:25:38.621564  350425 api_server.go:52] waiting for apiserver process to appear ...
	I1124 02:25:38.621625  350425 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 02:25:38.640077  350425 api_server.go:72] duration metric: took 41.107263444s to wait for apiserver process to appear ...
	I1124 02:25:38.640108  350425 api_server.go:88] waiting for apiserver healthz status ...
	I1124 02:25:38.640167  350425 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 02:25:38.645428  350425 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 02:25:38.646463  350425 api_server.go:141] control plane version: v1.34.1
	I1124 02:25:38.646491  350425 api_server.go:131] duration metric: took 6.374896ms to wait for apiserver health ...
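
The healthz probe logged above is a plain GET against the apiserver's /healthz endpoint, which returns the literal body "ok" on a healthy control plane. With a client-go clientset it can be expressed as (sketch; assumes the clientset from the polling example):

	func apiserverHealthz(ctx context.Context, cs *kubernetes.Clientset) (string, error) {
		body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
		if err != nil {
			return "", err
		}
		return string(body), nil // "ok" when healthy
	}
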
	I1124 02:25:38.646505  350425 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 02:25:38.693488  350425 system_pods.go:59] 20 kube-system pods found
	I1124 02:25:38.693547  350425 system_pods.go:61] "amd-gpu-device-plugin-6f6fp" [f718fb18-20f0-4f35-931c-3a308c345e99] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 02:25:38.693561  350425 system_pods.go:61] "coredns-66bc5c9577-rdmxf" [89daa76d-6cb5-46b6-80c2-e6feea646c06] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 02:25:38.693580  350425 system_pods.go:61] "csi-hostpath-attacher-0" [dfd1a64e-cd18-4f88-9ae8-933351bf5cff] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 02:25:38.693589  350425 system_pods.go:61] "csi-hostpath-resizer-0" [afe0c71b-407a-426f-9323-d835b8f2e8d5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 02:25:38.693599  350425 system_pods.go:61] "csi-hostpathplugin-lmkkf" [c8b973a0-b84c-4c2d-b283-00643dfccac7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 02:25:38.693606  350425 system_pods.go:61] "etcd-addons-831846" [bc9df316-a146-46c0-a791-a90e33b10de5] Running
	I1124 02:25:38.693610  350425 system_pods.go:61] "kindnet-8rv8j" [e51a1ca8-a9af-4c1f-bdaf-27b503467a22] Running
	I1124 02:25:38.693616  350425 system_pods.go:61] "kube-apiserver-addons-831846" [fb66712c-5e47-4256-906c-544eac5dbb55] Running
	I1124 02:25:38.693620  350425 system_pods.go:61] "kube-controller-manager-addons-831846" [dcc8ac48-94da-440d-aa7c-f5563637926a] Running
	I1124 02:25:38.693629  350425 system_pods.go:61] "kube-ingress-dns-minikube" [a8c07c0c-a689-4a36-98ed-b97c0d8c59e2] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 02:25:38.693632  350425 system_pods.go:61] "kube-proxy-78b65" [78176f75-edaa-46ac-803c-1f08847b0345] Running
	I1124 02:25:38.693636  350425 system_pods.go:61] "kube-scheduler-addons-831846" [91ee0d85-a527-476c-bedd-b5ec3faa6ee8] Running
	I1124 02:25:38.693640  350425 system_pods.go:61] "metrics-server-85b7d694d7-jmkg5" [a7773bc3-3047-45ae-ac07-238fe7a6282f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 02:25:38.693650  350425 system_pods.go:61] "nvidia-device-plugin-daemonset-gf6tr" [59b3a9aa-53dd-4231-97bd-2015d666639c] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 02:25:38.693661  350425 system_pods.go:61] "registry-6b586f9694-fmpk9" [f51b7a5d-73cd-404e-87db-f7b56c46e8fc] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 02:25:38.693677  350425 system_pods.go:61] "registry-creds-764b6fb674-h45vm" [8eac9d40-11df-4d1b-b4ed-05e91b6db498] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 02:25:38.693688  350425 system_pods.go:61] "registry-proxy-qnxkh" [492c9528-7caf-47b5-86ce-62e1cf455391] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 02:25:38.693698  350425 system_pods.go:61] "snapshot-controller-7d9fbc56b8-5mjkq" [77bb303b-f029-49fb-bc4a-12059b110a5a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 02:25:38.693708  350425 system_pods.go:61] "snapshot-controller-7d9fbc56b8-hhf7t" [cffe1923-8bb2-43b0-a27d-15c013c3e481] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 02:25:38.693715  350425 system_pods.go:61] "storage-provisioner" [2bb8d9da-7fe4-4c71-b117-512944dd1208] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 02:25:38.693721  350425 system_pods.go:74] duration metric: took 47.210583ms to wait for pod list to return data ...
	I1124 02:25:38.693732  350425 default_sa.go:34] waiting for default service account to be created ...
	I1124 02:25:38.695570  350425 default_sa.go:45] found service account: "default"
	I1124 02:25:38.695589  350425 default_sa.go:55] duration metric: took 1.852501ms for default service account to be created ...
	I1124 02:25:38.695598  350425 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 02:25:38.699495  350425 system_pods.go:86] 20 kube-system pods found
	[... same 20 kube-system pod entries as the 02:25:38.693 listing above; every status unchanged ...]
	I1124 02:25:38.699754  350425 retry.go:31] will retry after 224.455967ms: missing components: kube-dns
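
The retry.go lines here re-run the pod scan after a short, jittered delay until no required component (kube-dns, in this case) is missing. A hedged sketch of that pattern (the base delay and growth rule are assumptions, not minikube's exact backoff; assumes "context", "fmt", "math/rand", and "time" are imported):

	// Re-invokes check until it succeeds, sleeping a growing, jittered delay
	// between attempts, mirroring the "will retry after ..." lines above.
	func retryUntil(ctx context.Context, check func(context.Context) error) error {
		delay := 200 * time.Millisecond // assumed base delay
		for {
			err := check(ctx)
			if err == nil {
				return nil
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(delay):
			}
			// Grow the delay with jitter; the real backoff differs in detail.
			delay += time.Duration(rand.Int63n(int64(delay)))
		}
	}
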
	I1124 02:25:38.796190  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:38.928493  350425 system_pods.go:86] 20 kube-system pods found
	[... same 20 kube-system pod entries as the 02:25:38.693 listing above; every status unchanged ...]
	I1124 02:25:38.928689  350425 retry.go:31] will retry after 339.908579ms: missing components: kube-dns
	I1124 02:25:38.991612  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:39.070309  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:39.070383  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:39.273969  350425 system_pods.go:86] 20 kube-system pods found
	[... same 20 kube-system pod entries as the 02:25:38.693 listing above; every status unchanged ...]
	I1124 02:25:39.274214  350425 retry.go:31] will retry after 442.797405ms: missing components: kube-dns
	I1124 02:25:39.296834  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:39.490754  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:39.570959  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:39.571204  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:39.722201  350425 system_pods.go:86] 20 kube-system pods found
	[... 18 pod entries unchanged from the 02:25:38.693 listing above; the two that changed: ...]
	I1124 02:25:39.722246  350425 system_pods.go:89] "coredns-66bc5c9577-rdmxf" [89daa76d-6cb5-46b6-80c2-e6feea646c06] Running
	I1124 02:25:39.722338  350425 system_pods.go:89] "storage-provisioner" [2bb8d9da-7fe4-4c71-b117-512944dd1208] Running
	I1124 02:25:39.722346  350425 system_pods.go:126] duration metric: took 1.02674263s to wait for k8s-apps to be running ...
	I1124 02:25:39.722356  350425 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 02:25:39.722398  350425 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:25:39.736051  350425 system_svc.go:56] duration metric: took 13.685185ms WaitForService to wait for kubelet
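
The kubelet check above runs systemctl inside the node over minikube's ssh_runner and only looks at the exit status. Locally, the same probe is a single os/exec call (sketch; assumes "context" and "os/exec" are imported):

	// --quiet suppresses output; a zero exit status means the unit is active.
	func kubeletActive(ctx context.Context) bool {
		return exec.CommandContext(ctx, "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
	}
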
	I1124 02:25:39.736082  350425 kubeadm.go:587] duration metric: took 42.203274661s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 02:25:39.736107  350425 node_conditions.go:102] verifying NodePressure condition ...
	I1124 02:25:39.738344  350425 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 02:25:39.738376  350425 node_conditions.go:123] node cpu capacity is 8
	I1124 02:25:39.738399  350425 node_conditions.go:105] duration metric: took 2.286417ms to run NodePressure ...
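
The ephemeral-storage and CPU figures printed by node_conditions.go come straight off the node's status.capacity map. A sketch of reading them with client-go (again assuming the clientset and imports from the polling example):

	func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset, nodeName string) error {
		node, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
		if err != nil {
			return err
		}
		eph := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", eph.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
		return nil
	}
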
	I1124 02:25:39.738419  350425 start.go:242] waiting for startup goroutines ...
	I1124 02:25:39.796318  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... kapi.go:96 polling of the four selectors (gcp-auth, csi-hostpath-driver, registry, ingress-nginx) continued every ~500ms through 02:25:55.592; all four remained Pending: [<nil>] ...]
	I1124 02:25:55.796822  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:55.990313  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:56.070804  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:56.070829  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:56.297262  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:56.491743  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:56.570388  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:56.570397  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:56.797150  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:56.991019  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:57.070278  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:57.070297  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:57.296532  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:57.491143  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:57.570325  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:57.570353  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:57.797185  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:57.991734  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:58.071121  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:58.071299  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:58.297275  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:58.493392  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:58.571494  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:58.571545  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:58.797651  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:58.991126  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:59.071428  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:59.071524  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:59.296735  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:59.490327  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:25:59.571232  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:25:59.571268  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:25:59.797268  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:25:59.991186  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:00.070347  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:00.070448  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:00.297114  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:00.491434  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:00.571219  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:00.571312  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:00.796706  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:00.990520  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:01.069911  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:01.069946  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:01.297626  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:01.492045  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:01.570481  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:01.570523  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:01.797391  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:01.991331  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:02.070484  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:02.070496  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:02.297007  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:02.492314  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:02.571391  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:02.571398  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:02.797245  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:02.991447  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:03.070811  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:03.070877  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:03.297413  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:03.492866  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:03.573309  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 02:26:03.573914  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:03.798578  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:03.993101  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:04.071298  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:04.072298  350425 kapi.go:107] duration metric: took 1m5.004955371s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 02:26:04.297731  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:04.490266  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:04.570557  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:04.797580  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:04.991873  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:05.070803  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:05.297949  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:05.491811  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:05.570408  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:05.797548  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:05.991417  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:06.071627  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:06.297559  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:06.491153  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:06.572016  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:06.797239  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:06.992079  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:07.070541  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:07.297085  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:07.490427  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:07.569622  350425 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 02:26:07.797916  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:07.991451  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:08.075332  350425 kapi.go:107] duration metric: took 1m9.008032916s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 02:26:08.297549  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:08.491073  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:08.798592  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:08.992671  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:09.297139  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:09.491499  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:09.796919  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:09.991273  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:10.297775  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:10.490630  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:10.798175  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:10.992217  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:11.297291  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 02:26:11.491287  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:11.796491  350425 kapi.go:107] duration metric: took 1m6.002302696s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 02:26:11.797907  350425 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-831846 cluster.
	I1124 02:26:11.798965  350425 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 02:26:11.799915  350425 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 02:26:11.991728  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:12.491029  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:12.992038  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:13.490761  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:13.991344  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:14.490831  350425 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 02:26:14.990909  350425 kapi.go:107] duration metric: took 1m15.503183706s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 02:26:14.992794  350425 out.go:179] * Enabled addons: amd-gpu-device-plugin, ingress-dns, cloud-spanner, registry-creds, storage-provisioner-rancher, inspektor-gadget, nvidia-device-plugin, storage-provisioner, default-storageclass, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1124 02:26:14.993940  350425 addons.go:530] duration metric: took 1m17.461113384s for enable addons: enabled=[amd-gpu-device-plugin ingress-dns cloud-spanner registry-creds storage-provisioner-rancher inspektor-gadget nvidia-device-plugin storage-provisioner default-storageclass metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1124 02:26:14.993984  350425 start.go:247] waiting for cluster config update ...
	I1124 02:26:14.994010  350425 start.go:256] writing updated cluster config ...
	I1124 02:26:14.994291  350425 ssh_runner.go:195] Run: rm -f paused
	I1124 02:26:14.998351  350425 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 02:26:15.000903  350425 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-rdmxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.004367  350425 pod_ready.go:94] pod "coredns-66bc5c9577-rdmxf" is "Ready"
	I1124 02:26:15.004386  350425 pod_ready.go:86] duration metric: took 3.460188ms for pod "coredns-66bc5c9577-rdmxf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.005993  350425 pod_ready.go:83] waiting for pod "etcd-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.009103  350425 pod_ready.go:94] pod "etcd-addons-831846" is "Ready"
	I1124 02:26:15.009125  350425 pod_ready.go:86] duration metric: took 3.115113ms for pod "etcd-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.010750  350425 pod_ready.go:83] waiting for pod "kube-apiserver-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.014072  350425 pod_ready.go:94] pod "kube-apiserver-addons-831846" is "Ready"
	I1124 02:26:15.014093  350425 pod_ready.go:86] duration metric: took 3.326624ms for pod "kube-apiserver-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.015698  350425 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.401167  350425 pod_ready.go:94] pod "kube-controller-manager-addons-831846" is "Ready"
	I1124 02:26:15.401191  350425 pod_ready.go:86] duration metric: took 385.474121ms for pod "kube-controller-manager-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:15.601541  350425 pod_ready.go:83] waiting for pod "kube-proxy-78b65" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:16.001548  350425 pod_ready.go:94] pod "kube-proxy-78b65" is "Ready"
	I1124 02:26:16.001584  350425 pod_ready.go:86] duration metric: took 400.020096ms for pod "kube-proxy-78b65" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:16.202402  350425 pod_ready.go:83] waiting for pod "kube-scheduler-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:16.601245  350425 pod_ready.go:94] pod "kube-scheduler-addons-831846" is "Ready"
	I1124 02:26:16.601275  350425 pod_ready.go:86] duration metric: took 398.843581ms for pod "kube-scheduler-addons-831846" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 02:26:16.601290  350425 pod_ready.go:40] duration metric: took 1.6029084s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 02:26:16.642752  350425 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 02:26:16.644366  350425 out.go:179] * Done! kubectl is now configured to use "addons-831846" cluster and "default" namespace by default
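
The kapi.go:96 lines above are a plain label-selector poll: minikube lists the pods matching each selector and retries on an interval until every match reports Ready ("Pending: [<nil>]" is logged while no matching pod exists yet), and the kapi.go:107 lines record the total wait per selector. Below is a minimal sketch of that pattern with client-go; the namespace, selector, timeout, and 500ms interval are illustrative assumptions, not minikube's actual parameters.

	// waitforpods.go: poll pods matching a label selector until all are Ready.
	// Illustrative sketch of the wait loop behind the kapi.go:96 lines above;
	// the selector, namespace, and 500ms interval are assumptions, not minikube's code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}

	func waitForSelector(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
		start := time.Now()
		for {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			// An empty match list counts as not ready (the "Pending: [<nil>]" case).
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					ready = false
					break
				}
			}
			if ready {
				fmt.Printf("took %s to wait for %s\n", time.Since(start), selector)
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForSelector(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry"); err != nil {
			panic(err)
		}
	}

Polling by selector rather than watching a single pod name keeps the wait robust against pods being re-created mid-wait, which matches the repeated list calls visible in the log.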
	
	
	==> CRI-O <==
	Nov 24 02:26:14 addons-831846 crio[773]: time="2025-11-24T02:26:14.350273811Z" level=info msg="Starting container: 6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea" id=dde0d899-49d1-4a6c-874b-385a574c4bef name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 02:26:14 addons-831846 crio[773]: time="2025-11-24T02:26:14.352848452Z" level=info msg="Started container" PID=6263 containerID=6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea description=kube-system/csi-hostpathplugin-lmkkf/csi-snapshotter id=dde0d899-49d1-4a6c-874b-385a574c4bef name=/runtime.v1.RuntimeService/StartContainer sandboxID=3211430f96d272ab1f3296d4bd1432584fd43b799d67294865f630bd285d30d1
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.429451821Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c0d4a8bc-3110-49c1-9fad-4bbed770c928 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.429515572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.435373089Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1597d1f02aae2efb31297cc4f38d86f31afbec9cd414613cf50e236d5ac98b31 UID:1f1ea0f0-3e69-4c29-a085-19c46e304737 NetNS:/var/run/netns/35d27c02-ea06-4bb7-8424-4eee9dd0ac15 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004de410}] Aliases:map[]}"
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.435400013Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.452795226Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:1597d1f02aae2efb31297cc4f38d86f31afbec9cd414613cf50e236d5ac98b31 UID:1f1ea0f0-3e69-4c29-a085-19c46e304737 NetNS:/var/run/netns/35d27c02-ea06-4bb7-8424-4eee9dd0ac15 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0004de410}] Aliases:map[]}"
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.452925224Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.453617449Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.454454556Z" level=info msg="Ran pod sandbox 1597d1f02aae2efb31297cc4f38d86f31afbec9cd414613cf50e236d5ac98b31 with infra container: default/busybox/POD" id=c0d4a8bc-3110-49c1-9fad-4bbed770c928 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.457672048Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=48065ce2-182f-4a25-92f7-362dc1c4c9f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.457796859Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=48065ce2-182f-4a25-92f7-362dc1c4c9f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.457830788Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=48065ce2-182f-4a25-92f7-362dc1c4c9f1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.458320385Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=1427dd61-415e-480b-8f93-eea6e6b35ab9 name=/runtime.v1.ImageService/PullImage
	Nov 24 02:26:17 addons-831846 crio[773]: time="2025-11-24T02:26:17.459564573Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 02:26:18 addons-831846 crio[773]: time="2025-11-24T02:26:18.069290801Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=1427dd61-415e-480b-8f93-eea6e6b35ab9 name=/runtime.v1.ImageService/PullImage
	Nov 24 02:26:18 addons-831846 crio[773]: time="2025-11-24T02:26:18.069841294Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=05949363-9934-40b1-9248-10b8b73e3512 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 02:26:18 addons-831846 crio[773]: time="2025-11-24T02:26:18.071263388Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7dc78340-399b-4a37-8f98-bc8f3d6e900e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 02:26:18 addons-831846 crio[773]: time="2025-11-24T02:26:18.074911301Z" level=info msg="Creating container: default/busybox/busybox" id=73831ef2-ad94-4e87-bc59-0344aba960d9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 02:26:18 addons-831846 crio[773]: time="2025-11-24T02:26:18.075046332Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 02:26:18 addons-831846 crio[773]: time="2025-11-24T02:26:18.080432612Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 02:26:18 addons-831846 crio[773]: time="2025-11-24T02:26:18.080852785Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 02:26:18 addons-831846 crio[773]: time="2025-11-24T02:26:18.111662579Z" level=info msg="Created container 74ffea65b1dc07ca773c5e1f05adb54503aca3c3fd461dae9c2ce437afc8aa5f: default/busybox/busybox" id=73831ef2-ad94-4e87-bc59-0344aba960d9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 02:26:18 addons-831846 crio[773]: time="2025-11-24T02:26:18.115253893Z" level=info msg="Starting container: 74ffea65b1dc07ca773c5e1f05adb54503aca3c3fd461dae9c2ce437afc8aa5f" id=fae1d2d3-c81a-48d3-ad28-f3dde7b2d965 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 02:26:18 addons-831846 crio[773]: time="2025-11-24T02:26:18.117083623Z" level=info msg="Started container" PID=6368 containerID=74ffea65b1dc07ca773c5e1f05adb54503aca3c3fd461dae9c2ce437afc8aa5f description=default/busybox/busybox id=fae1d2d3-c81a-48d3-ad28-f3dde7b2d965 name=/runtime.v1.RuntimeService/StartContainer sandboxID=1597d1f02aae2efb31297cc4f38d86f31afbec9cd414613cf50e236d5ac98b31
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	74ffea65b1dc0       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   1597d1f02aae2       busybox                                    default
	6a692d55674d0       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          12 seconds ago       Running             csi-snapshotter                          0                   3211430f96d27       csi-hostpathplugin-lmkkf                   kube-system
	80aafa11113da       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          13 seconds ago       Running             csi-provisioner                          0                   3211430f96d27       csi-hostpathplugin-lmkkf                   kube-system
	ab306614f72c4       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            14 seconds ago       Running             liveness-probe                           0                   3211430f96d27       csi-hostpathplugin-lmkkf                   kube-system
	e3eafecec2c9c       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 15 seconds ago       Running             gcp-auth                                 0                   650604de0c37c       gcp-auth-78565c9fb4-g9pxh                  gcp-auth
	87464323b0915       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           16 seconds ago       Running             hostpath                                 0                   3211430f96d27       csi-hostpathplugin-lmkkf                   kube-system
	04b1500539ee9       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            17 seconds ago       Running             gadget                                   0                   3ef36a913e4fa       gadget-465gm                               gadget
	5b6dca54cd1ba       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                19 seconds ago       Running             node-driver-registrar                    0                   3211430f96d27       csi-hostpathplugin-lmkkf                   kube-system
	dce04ca4ae70e       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             20 seconds ago       Running             controller                               0                   de71ba0fd8f3c       ingress-nginx-controller-6c8bf45fb-645nl   ingress-nginx
	cb685f958daee       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              23 seconds ago       Running             registry-proxy                           0                   1877d0221d21f       registry-proxy-qnxkh                       kube-system
	d36d11b30d634       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     24 seconds ago       Running             amd-gpu-device-plugin                    0                   580fcf6ee6a7d       amd-gpu-device-plugin-6f6fp                kube-system
	488f8f5aecccf       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      26 seconds ago       Running             volume-snapshot-controller               0                   4eecb3937ec7e       snapshot-controller-7d9fbc56b8-5mjkq       kube-system
	3a2f505270ed3       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   26 seconds ago       Running             csi-external-health-monitor-controller   0                   3211430f96d27       csi-hostpathplugin-lmkkf                   kube-system
	98efe6015f81d       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     27 seconds ago       Running             nvidia-device-plugin-ctr                 0                   f4914870f3b6c       nvidia-device-plugin-daemonset-gf6tr       kube-system
	56d774514fe29       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   29 seconds ago       Exited              patch                                    0                   a3bf6da6a0e2a       gcp-auth-certs-patch-8j74d                 gcp-auth
	0976fdc9fed0d       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   30 seconds ago       Exited              create                                   0                   e6c7402e0e440       gcp-auth-certs-create-xhfc6                gcp-auth
	20ce1e4b1e717       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             30 seconds ago       Running             local-path-provisioner                   0                   d44e54ff5a3e7       local-path-provisioner-648f6765c9-r8p78    local-path-storage
	e716ff90bdcb5       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   31 seconds ago       Exited              patch                                    0                   2e45b3ee81df6       ingress-nginx-admission-patch-sd9fv        ingress-nginx
	c290117d4f470       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           31 seconds ago       Running             registry                                 0                   9f4712529e2ba       registry-6b586f9694-fmpk9                  kube-system
	87618f57415b4       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              33 seconds ago       Running             yakd                                     0                   7c919e329d34b       yakd-dashboard-5ff678cb9-2mtd5             yakd-dashboard
	1a1040cba9828       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             36 seconds ago       Running             csi-attacher                             0                   af193c1fc2b23       csi-hostpath-attacher-0                    kube-system
	7315919ab4d42       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              37 seconds ago       Running             csi-resizer                              0                   396609098de55       csi-hostpath-resizer-0                     kube-system
	f1c6f93620483       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               38 seconds ago       Running             minikube-ingress-dns                     0                   c1398ac562604       kube-ingress-dns-minikube                  kube-system
	ab2275b7d143a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      43 seconds ago       Running             volume-snapshot-controller               0                   dd5a845edb903       snapshot-controller-7d9fbc56b8-hhf7t       kube-system
	cc283ff38672b       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   44 seconds ago       Exited              create                                   0                   9577123c0f1ed       ingress-nginx-admission-create-lkj84       ingress-nginx
	70c9e18564a35       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               44 seconds ago       Running             cloud-spanner-emulator                   0                   cd82be8668084       cloud-spanner-emulator-5bdddb765-wf4l7     default
	211b6a7c5a0f7       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        47 seconds ago       Running             metrics-server                           0                   969f4caf1f117       metrics-server-85b7d694d7-jmkg5            kube-system
	6be1f10bddb9d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             48 seconds ago       Running             storage-provisioner                      0                   f9a6c6fa1e19b       storage-provisioner                        kube-system
	109ca0df89d74       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             48 seconds ago       Running             coredns                                  0                   fb91c1d3e22ea       coredns-66bc5c9577-rdmxf                   kube-system
	aac60890d17a3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   d5c64b563aaf9       kube-proxy-78b65                           kube-system
	837f7d173b2d6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   b60e8128a7968       kindnet-8rv8j                              kube-system
	3949ef8e07cb2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   e79da0e4cb0e4       kube-scheduler-addons-831846               kube-system
	9c9ff4c6ef4b7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   7e1df73d57368       kube-apiserver-addons-831846               kube-system
	a1f1f10128909       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   0e84c8f3a8b7b       kube-controller-manager-addons-831846      kube-system
	8704fbfea0bb0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   45ab090f2b747       etcd-addons-831846                         kube-system
	
	
	==> coredns [109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314] <==
	[INFO] 10.244.0.19:41501 - 64923 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000163701s
	[INFO] 10.244.0.19:59767 - 29620 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000085803s
	[INFO] 10.244.0.19:59767 - 29338 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000108472s
	[INFO] 10.244.0.19:49182 - 18567 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000097213s
	[INFO] 10.244.0.19:49182 - 18863 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000140939s
	[INFO] 10.244.0.19:58855 - 40420 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000071958s
	[INFO] 10.244.0.19:58855 - 40126 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000091252s
	[INFO] 10.244.0.19:58012 - 28677 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000044834s
	[INFO] 10.244.0.19:58012 - 28424 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000081903s
	[INFO] 10.244.0.19:38888 - 4753 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000101952s
	[INFO] 10.244.0.19:38888 - 4987 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000153239s
	[INFO] 10.244.0.22:56253 - 22976 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00018798s
	[INFO] 10.244.0.22:51533 - 57301 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000280337s
	[INFO] 10.244.0.22:59343 - 47616 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118928s
	[INFO] 10.244.0.22:48642 - 1018 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000204895s
	[INFO] 10.244.0.22:44219 - 49401 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000148823s
	[INFO] 10.244.0.22:47080 - 2782 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086745s
	[INFO] 10.244.0.22:58455 - 62682 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.004583697s
	[INFO] 10.244.0.22:53011 - 65406 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.005329066s
	[INFO] 10.244.0.22:41941 - 57592 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004696961s
	[INFO] 10.244.0.22:58228 - 37793 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006173156s
	[INFO] 10.244.0.22:57028 - 2235 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00552887s
	[INFO] 10.244.0.22:58224 - 62800 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005634499s
	[INFO] 10.244.0.22:39374 - 22570 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000803743s
	[INFO] 10.244.0.22:36319 - 53376 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001301863s
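
The alternating NXDOMAIN/NOERROR pairs above are resolv.conf search-path expansion at work: with the kubelet's default of ndots:5, a short external name such as storage.googleapis.com is first tried against every search domain of the querying pod's namespace (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE-provided internal domains), and only the final bare query succeeds. A short sketch of that candidate-list construction, with the search list transcribed from the log lines above:

	// searchpath.go: reproduce the DNS query fan-out visible in the coredns log.
	// The search domains are transcribed from the NXDOMAIN lines above; ndots:5
	// is the kubelet default for pod resolv.conf.
	package main

	import (
		"fmt"
		"strings"
	)

	// candidates returns the names a resolver tries, in order: each search
	// domain is appended first when the name has fewer than ndots dots, and
	// the bare name is tried last.
	func candidates(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return append(out, name)
	}

	func main() {
		search := []string{
			"gcp-auth.svc.cluster.local",
			"svc.cluster.local",
			"cluster.local",
			"us-central1-a.c.k8s-minikube.internal",
			"c.k8s-minikube.internal",
			"google.internal",
		}
		for _, q := range candidates("storage.googleapis.com", search, 5) {
			fmt.Println(q)
		}
	}

Running this prints the same six suffixed names that return NXDOMAIN in the log, followed by the bare storage.googleapis.com that resolves with NOERROR.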
	
	
	==> describe nodes <==
	Name:               addons-831846
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-831846
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=addons-831846
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T02_24_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-831846
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-831846"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 02:24:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-831846
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 02:26:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 02:26:22 +0000   Mon, 24 Nov 2025 02:24:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 02:26:22 +0000   Mon, 24 Nov 2025 02:24:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 02:26:22 +0000   Mon, 24 Nov 2025 02:24:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 02:26:22 +0000   Mon, 24 Nov 2025 02:25:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-831846
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                66ee3362-62e8-4675-bb66-01d23f6ba5e0
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  default                     cloud-spanner-emulator-5bdddb765-wf4l7      0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  gadget                      gadget-465gm                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  gcp-auth                    gcp-auth-78565c9fb4-g9pxh                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-645nl    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         87s
	  kube-system                 amd-gpu-device-plugin-6f6fp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 coredns-66bc5c9577-rdmxf                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     89s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 csi-hostpathplugin-lmkkf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 etcd-addons-831846                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         95s
	  kube-system                 kindnet-8rv8j                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      89s
	  kube-system                 kube-apiserver-addons-831846                250m (3%)     0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-controller-manager-addons-831846       200m (2%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-78b65                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-scheduler-addons-831846                100m (1%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 metrics-server-85b7d694d7-jmkg5             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         89s
	  kube-system                 nvidia-device-plugin-daemonset-gf6tr        0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 registry-6b586f9694-fmpk9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-creds-764b6fb674-h45vm             0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-proxy-qnxkh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 snapshot-controller-7d9fbc56b8-5mjkq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 snapshot-controller-7d9fbc56b8-hhf7t        0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  local-path-storage          local-path-provisioner-648f6765c9-r8p78     0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-2mtd5              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 89s                  kube-proxy       
	  Normal  Starting                 101s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  101s (x8 over 101s)  kubelet          Node addons-831846 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s (x8 over 101s)  kubelet          Node addons-831846 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s (x8 over 101s)  kubelet          Node addons-831846 status is now: NodeHasSufficientPID
	  Normal  Starting                 96s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s                  kubelet          Node addons-831846 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s                  kubelet          Node addons-831846 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s                  kubelet          Node addons-831846 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s                  node-controller  Node addons-831846 event: Registered Node addons-831846 in Controller
	  Normal  NodeReady                49s                  kubelet          Node addons-831846 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000021] ll header: 00000000: 36 fa 08 cb 2a d4 f6 dd 2d 1d 04 33 08 00
	[Nov24 02:15] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 79 23 49 4f 68 08 06
	[ +13.555325] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff e6 36 09 3d c8 41 08 06
	[  +0.000314] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff f2 79 23 49 4f 68 08 06
	[Nov24 02:16] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 68 f0 1b 94 62 08 06
	[  +7.419677] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 86 ca fc 5f 92 50 08 06
	[  +4.493392] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 96 f4 63 53 9c 06 08 06
	[  +8.597525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 32 0f 6e 9a 34 e8 08 06
	[  +0.000747] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 56 68 f0 1b 94 62 08 06
	[  +8.304530] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 06 74 01 40 70 30 08 06
	[  +0.000650] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 96 f4 63 53 9c 06 08 06
	[Nov24 02:17] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 a4 5e 1f c0 90 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 ca fc 5f 92 50 08 06
	
	
	==> etcd [8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971] <==
	{"level":"warn","ts":"2025-11-24T02:24:48.734398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.740078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.747967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.754578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.761678Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.775599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.781597Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.787170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:48.832605Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:59.949947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:24:59.956148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:25:26.233245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:25:26.240228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:25:26.264837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56442","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T02:26:18.883302Z","caller":"traceutil/trace.go:172","msg":"trace[636580253] transaction","detail":"{read_only:false; response_revision:1255; number_of_response:1; }","duration":"118.447794ms","start":"2025-11-24T02:26:18.764832Z","end":"2025-11-24T02:26:18.883279Z","steps":["trace[636580253] 'process raft request'  (duration: 118.333623ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:26:18.883351Z","caller":"traceutil/trace.go:172","msg":"trace[1052960422] transaction","detail":"{read_only:false; response_revision:1259; number_of_response:1; }","duration":"115.094279ms","start":"2025-11-24T02:26:18.768245Z","end":"2025-11-24T02:26:18.883339Z","steps":["trace[1052960422] 'process raft request'  (duration: 115.059494ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:26:18.883355Z","caller":"traceutil/trace.go:172","msg":"trace[509223276] transaction","detail":"{read_only:false; response_revision:1254; number_of_response:1; }","duration":"118.815185ms","start":"2025-11-24T02:26:18.764525Z","end":"2025-11-24T02:26:18.883340Z","steps":["trace[509223276] 'process raft request'  (duration: 118.549037ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:26:18.883388Z","caller":"traceutil/trace.go:172","msg":"trace[1791445000] transaction","detail":"{read_only:false; response_revision:1257; number_of_response:1; }","duration":"118.518913ms","start":"2025-11-24T02:26:18.764856Z","end":"2025-11-24T02:26:18.883375Z","steps":["trace[1791445000] 'process raft request'  (duration: 118.378576ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:26:18.883544Z","caller":"traceutil/trace.go:172","msg":"trace[1797184880] transaction","detail":"{read_only:false; response_revision:1258; number_of_response:1; }","duration":"117.15402ms","start":"2025-11-24T02:26:18.766382Z","end":"2025-11-24T02:26:18.883536Z","steps":["trace[1797184880] 'process raft request'  (duration: 116.884835ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T02:26:18.883321Z","caller":"traceutil/trace.go:172","msg":"trace[71626399] transaction","detail":"{read_only:false; response_revision:1256; number_of_response:1; }","duration":"118.465711ms","start":"2025-11-24T02:26:18.764840Z","end":"2025-11-24T02:26:18.883306Z","steps":["trace[71626399] 'process raft request'  (duration: 118.364666ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T02:26:19.068301Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.284043ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/busybox\" limit:1 ","response":"range_response_count:1 size:3206"}
	{"level":"info","ts":"2025-11-24T02:26:19.068373Z","caller":"traceutil/trace.go:172","msg":"trace[1740405195] range","detail":"{range_begin:/registry/pods/default/busybox; range_end:; response_count:1; response_revision:1259; }","duration":"177.372947ms","start":"2025-11-24T02:26:18.890985Z","end":"2025-11-24T02:26:19.068357Z","steps":["trace[1740405195] 'agreement among raft nodes before linearized reading'  (duration: 76.131192ms)","trace[1740405195] 'range keys from in-memory index tree'  (duration: 101.121916ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T02:26:19.068908Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.180565ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041523998558254 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1238 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T02:26:19.069038Z","caller":"traceutil/trace.go:172","msg":"trace[687560575] transaction","detail":"{read_only:false; response_revision:1260; number_of_response:1; }","duration":"181.019435ms","start":"2025-11-24T02:26:18.887995Z","end":"2025-11-24T02:26:19.069014Z","steps":["trace[687560575] 'process raft request'  (duration: 79.161955ms)","trace[687560575] 'compare'  (duration: 101.092988ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T02:26:19.069218Z","caller":"traceutil/trace.go:172","msg":"trace[368278866] transaction","detail":"{read_only:false; response_revision:1261; number_of_response:1; }","duration":"177.811285ms","start":"2025-11-24T02:26:18.891393Z","end":"2025-11-24T02:26:19.069204Z","steps":["trace[368278866] 'process raft request'  (duration: 177.58233ms)"],"step_count":1}
	
	
	==> gcp-auth [e3eafecec2c9ca08ae5afd8a3456082dbd372ee93aa1b42ce7e982c2a894a689] <==
	2025/11/24 02:26:11 GCP Auth Webhook started!
	2025/11/24 02:26:16 Ready to marshal response ...
	2025/11/24 02:26:16 Ready to write response ...
	2025/11/24 02:26:17 Ready to marshal response ...
	2025/11/24 02:26:17 Ready to write response ...
	2025/11/24 02:26:17 Ready to marshal response ...
	2025/11/24 02:26:17 Ready to write response ...
	
	
	==> kernel <==
	 02:26:27 up  1:08,  0 user,  load average: 1.63, 1.73, 1.94
	Linux addons-831846 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd] <==
	I1124 02:24:57.695108       1 main.go:148] setting mtu 1500 for CNI 
	I1124 02:24:57.695127       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 02:24:57.695152       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T02:24:57Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 02:24:57.897185       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 02:24:57.965015       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 02:24:57.965053       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 02:24:57.966263       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 02:25:27.898405       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 02:25:27.966237       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 02:25:27.966237       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 02:25:27.966259       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1124 02:25:29.565406       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 02:25:29.565439       1 metrics.go:72] Registering metrics
	I1124 02:25:29.565505       1 controller.go:711] "Syncing nftables rules"
	I1124 02:25:37.897230       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:25:37.897266       1 main.go:301] handling current node
	I1124 02:25:47.897011       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:25:47.897050       1 main.go:301] handling current node
	I1124 02:25:57.896739       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:25:57.896766       1 main.go:301] handling current node
	I1124 02:26:07.897086       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:26:07.897140       1 main.go:301] handling current node
	I1124 02:26:17.897004       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:26:17.897036       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90] <==
	I1124 02:25:05.742985       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.98.41.49"}
	W1124 02:25:26.233149       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 02:25:26.240275       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1124 02:25:26.258030       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 02:25:26.264803       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 02:25:38.162816       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.41.49:443: connect: connection refused
	E1124 02:25:38.162865       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.41.49:443: connect: connection refused" logger="UnhandledError"
	W1124 02:25:38.162908       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.41.49:443: connect: connection refused
	E1124 02:25:38.162935       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.41.49:443: connect: connection refused" logger="UnhandledError"
	W1124 02:25:38.188370       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.41.49:443: connect: connection refused
	E1124 02:25:38.188409       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.41.49:443: connect: connection refused" logger="UnhandledError"
	W1124 02:25:38.190644       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.98.41.49:443: connect: connection refused
	E1124 02:25:38.190679       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.98.41.49:443: connect: connection refused" logger="UnhandledError"
	W1124 02:25:41.580202       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 02:25:41.580240       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.203.36:443: connect: connection refused" logger="UnhandledError"
	E1124 02:25:41.580286       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 02:25:41.580678       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.203.36:443: connect: connection refused" logger="UnhandledError"
	E1124 02:25:41.586281       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.203.36:443: connect: connection refused" logger="UnhandledError"
	E1124 02:25:41.606985       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.104.203.36:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.104.203.36:443: connect: connection refused" logger="UnhandledError"
	I1124 02:25:41.680453       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 02:26:25.262158       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34290: use of closed network connection
	E1124 02:26:25.401547       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:34322: use of closed network connection
	
	
	==> kube-controller-manager [a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf] <==
	I1124 02:24:56.218875       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 02:24:56.218881       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 02:24:56.218930       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 02:24:56.218937       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 02:24:56.219019       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 02:24:56.219033       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 02:24:56.219019       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 02:24:56.219364       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 02:24:56.219449       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 02:24:56.220222       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 02:24:56.220231       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 02:24:56.220250       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 02:24:56.222493       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 02:24:56.222605       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 02:24:56.223817       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 02:24:56.229046       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 02:24:56.241466       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 02:25:26.227699       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 02:25:26.227863       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1124 02:25:26.227953       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 02:25:26.250172       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1124 02:25:26.253295       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 02:25:26.328301       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 02:25:26.353412       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 02:25:41.225416       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf] <==
	I1124 02:24:57.488624       1 server_linux.go:53] "Using iptables proxy"
	I1124 02:24:57.574905       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 02:24:57.677866       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:24:57.677929       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:24:57.678911       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:24:57.790408       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:24:57.790550       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:24:57.809782       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:24:57.812072       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:24:57.812099       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:24:57.814296       1 config.go:200] "Starting service config controller"
	I1124 02:24:57.814390       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:24:57.814442       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:24:57.814468       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:24:57.814504       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:24:57.814528       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:24:57.815545       1 config.go:309] "Starting node config controller"
	I1124 02:24:57.816811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:24:57.816871       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 02:24:57.916080       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 02:24:57.916127       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:24:57.916163       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76] <==
	E1124 02:24:49.225926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:24:49.226043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:24:49.226579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:24:49.226615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:24:49.226621       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:24:49.226688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:24:49.226742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:24:49.226740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 02:24:49.226793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:24:49.226822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 02:24:49.226845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:24:49.226917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:24:49.226943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:24:49.226930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:24:49.227019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:24:49.227033       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:24:50.084796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 02:24:50.086544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:24:50.100476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:24:50.116355       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 02:24:50.156384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:24:50.248073       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:24:50.433678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:24:50.491152       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 02:24:53.523908       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 02:25:58 addons-831846 kubelet[1292]: I1124 02:25:58.768006    1292 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bdbaf7a-7363-4c11-bdb2-be50ff0fe483-kube-api-access-n4s9h" (OuterVolumeSpecName: "kube-api-access-n4s9h") pod "3bdbaf7a-7363-4c11-bdb2-be50ff0fe483" (UID: "3bdbaf7a-7363-4c11-bdb2-be50ff0fe483"). InnerVolumeSpecName "kube-api-access-n4s9h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 24 02:25:58 addons-831846 kubelet[1292]: I1124 02:25:58.866132    1292 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n4s9h\" (UniqueName: \"kubernetes.io/projected/3bdbaf7a-7363-4c11-bdb2-be50ff0fe483-kube-api-access-n4s9h\") on node \"addons-831846\" DevicePath \"\""
	Nov 24 02:25:58 addons-831846 kubelet[1292]: I1124 02:25:58.866165    1292 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-66796\" (UniqueName: \"kubernetes.io/projected/6c6b8e35-c550-4497-a6f2-cd1bf84717ed-kube-api-access-66796\") on node \"addons-831846\" DevicePath \"\""
	Nov 24 02:25:59 addons-831846 kubelet[1292]: I1124 02:25:59.631729    1292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6c7402e0e44008b4f1559e93030edbafc148c3da78fa26fb84e604d79c71522"
	Nov 24 02:25:59 addons-831846 kubelet[1292]: I1124 02:25:59.633368    1292 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3bf6da6a0e2a3b89c91f0e27f6faa4c8d209d709d89d36daf2309eb8c7ba7ba"
	Nov 24 02:26:00 addons-831846 kubelet[1292]: I1124 02:26:00.638535    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gf6tr" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:26:00 addons-831846 kubelet[1292]: I1124 02:26:00.649008    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-gf6tr" podStartSLOduration=1.467714419 podStartE2EDuration="22.648987384s" podCreationTimestamp="2025-11-24 02:25:38 +0000 UTC" firstStartedPulling="2025-11-24 02:25:38.616946524 +0000 UTC m=+47.287658671" lastFinishedPulling="2025-11-24 02:25:59.798219491 +0000 UTC m=+68.468931636" observedRunningTime="2025-11-24 02:26:00.648473766 +0000 UTC m=+69.319185949" watchObservedRunningTime="2025-11-24 02:26:00.648987384 +0000 UTC m=+69.319699539"
	Nov 24 02:26:01 addons-831846 kubelet[1292]: I1124 02:26:01.644446    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-gf6tr" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:26:01 addons-831846 kubelet[1292]: I1124 02:26:01.654538    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/snapshot-controller-7d9fbc56b8-5mjkq" podStartSLOduration=40.461675343 podStartE2EDuration="1m2.654522509s" podCreationTimestamp="2025-11-24 02:24:59 +0000 UTC" firstStartedPulling="2025-11-24 02:25:38.618906714 +0000 UTC m=+47.289618847" lastFinishedPulling="2025-11-24 02:26:00.811753876 +0000 UTC m=+69.482466013" observedRunningTime="2025-11-24 02:26:01.653600306 +0000 UTC m=+70.324312479" watchObservedRunningTime="2025-11-24 02:26:01.654522509 +0000 UTC m=+70.325234663"
	Nov 24 02:26:02 addons-831846 kubelet[1292]: I1124 02:26:02.649413    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6f6fp" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:26:02 addons-831846 kubelet[1292]: I1124 02:26:02.660686    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-6f6fp" podStartSLOduration=1.360008715 podStartE2EDuration="24.660665187s" podCreationTimestamp="2025-11-24 02:25:38 +0000 UTC" firstStartedPulling="2025-11-24 02:25:38.623878428 +0000 UTC m=+47.294590568" lastFinishedPulling="2025-11-24 02:26:01.924534907 +0000 UTC m=+70.595247040" observedRunningTime="2025-11-24 02:26:02.660414055 +0000 UTC m=+71.331126232" watchObservedRunningTime="2025-11-24 02:26:02.660665187 +0000 UTC m=+71.331377343"
	Nov 24 02:26:03 addons-831846 kubelet[1292]: I1124 02:26:03.659412    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-6f6fp" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:26:03 addons-831846 kubelet[1292]: I1124 02:26:03.659528    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qnxkh" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:26:04 addons-831846 kubelet[1292]: I1124 02:26:04.662941    1292 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-qnxkh" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 02:26:05 addons-831846 kubelet[1292]: I1124 02:26:05.635340    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-qnxkh" podStartSLOduration=3.070493316 podStartE2EDuration="27.635310838s" podCreationTimestamp="2025-11-24 02:25:38 +0000 UTC" firstStartedPulling="2025-11-24 02:25:38.632226001 +0000 UTC m=+47.302938134" lastFinishedPulling="2025-11-24 02:26:03.197043516 +0000 UTC m=+71.867755656" observedRunningTime="2025-11-24 02:26:03.676125641 +0000 UTC m=+72.346837813" watchObservedRunningTime="2025-11-24 02:26:05.635310838 +0000 UTC m=+74.306023013"
	Nov 24 02:26:07 addons-831846 kubelet[1292]: I1124 02:26:07.689077    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-645nl" podStartSLOduration=50.490421457 podStartE2EDuration="1m8.689055011s" podCreationTimestamp="2025-11-24 02:24:59 +0000 UTC" firstStartedPulling="2025-11-24 02:25:48.427635736 +0000 UTC m=+57.098347870" lastFinishedPulling="2025-11-24 02:26:06.626269274 +0000 UTC m=+75.296981424" observedRunningTime="2025-11-24 02:26:07.688424295 +0000 UTC m=+76.359136474" watchObservedRunningTime="2025-11-24 02:26:07.689055011 +0000 UTC m=+76.359767166"
	Nov 24 02:26:10 addons-831846 kubelet[1292]: E1124 02:26:10.053231    1292 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 24 02:26:10 addons-831846 kubelet[1292]: E1124 02:26:10.053797    1292 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8eac9d40-11df-4d1b-b4ed-05e91b6db498-gcr-creds podName:8eac9d40-11df-4d1b-b4ed-05e91b6db498 nodeName:}" failed. No retries permitted until 2025-11-24 02:26:42.053766221 +0000 UTC m=+110.724478373 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/8eac9d40-11df-4d1b-b4ed-05e91b6db498-gcr-creds") pod "registry-creds-764b6fb674-h45vm" (UID: "8eac9d40-11df-4d1b-b4ed-05e91b6db498") : secret "registry-creds-gcr" not found
	Nov 24 02:26:10 addons-831846 kubelet[1292]: I1124 02:26:10.705958    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-465gm" podStartSLOduration=66.187153472 podStartE2EDuration="1m12.705937731s" podCreationTimestamp="2025-11-24 02:24:58 +0000 UTC" firstStartedPulling="2025-11-24 02:26:03.189677442 +0000 UTC m=+71.860389581" lastFinishedPulling="2025-11-24 02:26:09.708461703 +0000 UTC m=+78.379173840" observedRunningTime="2025-11-24 02:26:10.704683128 +0000 UTC m=+79.375395325" watchObservedRunningTime="2025-11-24 02:26:10.705937731 +0000 UTC m=+79.376649883"
	Nov 24 02:26:11 addons-831846 kubelet[1292]: I1124 02:26:11.466526    1292 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 24 02:26:11 addons-831846 kubelet[1292]: I1124 02:26:11.466589    1292 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 24 02:26:11 addons-831846 kubelet[1292]: I1124 02:26:11.710875    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-g9pxh" podStartSLOduration=65.474246131 podStartE2EDuration="1m6.710854009s" podCreationTimestamp="2025-11-24 02:25:05 +0000 UTC" firstStartedPulling="2025-11-24 02:26:10.378201176 +0000 UTC m=+79.048913319" lastFinishedPulling="2025-11-24 02:26:11.61480905 +0000 UTC m=+80.285521197" observedRunningTime="2025-11-24 02:26:11.709673702 +0000 UTC m=+80.380385877" watchObservedRunningTime="2025-11-24 02:26:11.710854009 +0000 UTC m=+80.381566166"
	Nov 24 02:26:14 addons-831846 kubelet[1292]: I1124 02:26:14.731717    1292 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-lmkkf" podStartSLOduration=1.041292257 podStartE2EDuration="36.731695806s" podCreationTimestamp="2025-11-24 02:25:38 +0000 UTC" firstStartedPulling="2025-11-24 02:25:38.618341391 +0000 UTC m=+47.289053526" lastFinishedPulling="2025-11-24 02:26:14.308744935 +0000 UTC m=+82.979457075" observedRunningTime="2025-11-24 02:26:14.730234294 +0000 UTC m=+83.400946490" watchObservedRunningTime="2025-11-24 02:26:14.731695806 +0000 UTC m=+83.402407961"
	Nov 24 02:26:17 addons-831846 kubelet[1292]: I1124 02:26:17.306344    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1f1ea0f0-3e69-4c29-a085-19c46e304737-gcp-creds\") pod \"busybox\" (UID: \"1f1ea0f0-3e69-4c29-a085-19c46e304737\") " pod="default/busybox"
	Nov 24 02:26:17 addons-831846 kubelet[1292]: I1124 02:26:17.306392    1292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kzh9\" (UniqueName: \"kubernetes.io/projected/1f1ea0f0-3e69-4c29-a085-19c46e304737-kube-api-access-8kzh9\") pod \"busybox\" (UID: \"1f1ea0f0-3e69-4c29-a085-19c46e304737\") " pod="default/busybox"
	
	
	==> storage-provisioner [6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03] <==
	W1124 02:26:02.792270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:04.795324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:04.817874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:06.820723       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:06.823713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:08.827056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:08.830962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:10.834126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:10.838153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:12.841866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:12.846161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:14.848580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:14.851642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:16.854195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:16.857960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:18.884267       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:19.069944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:21.073061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:21.077871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:23.080829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:23.084489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:25.087478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:25.092287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:27.095405       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:26:27.099054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-831846 -n addons-831846
helpers_test.go:269: (dbg) Run:  kubectl --context addons-831846 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-create-xhfc6 gcp-auth-certs-patch-8j74d ingress-nginx-admission-create-lkj84 ingress-nginx-admission-patch-sd9fv registry-creds-764b6fb674-h45vm
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-831846 describe pod gcp-auth-certs-create-xhfc6 gcp-auth-certs-patch-8j74d ingress-nginx-admission-create-lkj84 ingress-nginx-admission-patch-sd9fv registry-creds-764b6fb674-h45vm
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-831846 describe pod gcp-auth-certs-create-xhfc6 gcp-auth-certs-patch-8j74d ingress-nginx-admission-create-lkj84 ingress-nginx-admission-patch-sd9fv registry-creds-764b6fb674-h45vm: exit status 1 (63.541651ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-xhfc6" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-8j74d" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-lkj84" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-sd9fv" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-h45vm" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-831846 describe pod gcp-auth-certs-create-xhfc6 gcp-auth-certs-patch-8j74d ingress-nginx-admission-create-lkj84 ingress-nginx-admission-patch-sd9fv registry-creds-764b6fb674-h45vm: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable headlamp --alsologtostderr -v=1: exit status 11 (249.857888ms)

                                                
                                                
-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:26:27.925562  359541 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:26:27.925839  359541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:27.925850  359541 out.go:374] Setting ErrFile to fd 2...
	I1124 02:26:27.925855  359541 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:27.926103  359541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:26:27.926422  359541 mustload.go:66] Loading cluster: addons-831846
	I1124 02:26:27.926766  359541 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:27.926785  359541 addons.go:622] checking whether the cluster is paused
	I1124 02:26:27.926905  359541 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:27.926924  359541 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:26:27.927340  359541 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:26:27.944736  359541 ssh_runner.go:195] Run: systemctl --version
	I1124 02:26:27.944778  359541 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:26:27.961579  359541 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:26:28.058745  359541 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:26:28.058810  359541 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:26:28.089067  359541 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:26:28.089095  359541 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:26:28.089100  359541 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:26:28.089103  359541 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:26:28.089106  359541 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:26:28.089109  359541 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:26:28.089112  359541 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:26:28.089115  359541 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:26:28.089117  359541 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:26:28.089123  359541 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:26:28.089126  359541 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:26:28.089129  359541 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:26:28.089132  359541 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:26:28.089135  359541 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:26:28.089137  359541 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:26:28.089145  359541 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:26:28.089148  359541 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:26:28.089152  359541 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:26:28.089155  359541 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:26:28.089158  359541 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:26:28.089161  359541 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:26:28.089164  359541 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:26:28.089166  359541 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:26:28.089169  359541 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:26:28.089172  359541 cri.go:89] found id: ""
	I1124 02:26:28.089209  359541 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:26:28.102719  359541 out.go:203] 
	W1124 02:26:28.103977  359541 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:26:28.104000  359541 out.go:285] * 
	* 
	W1124 02:26:28.107874  359541 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:26:28.109325  359541 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.47s)
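Note on this failure mode (it repeats for every addons-disable call in this run): before disabling an addon, minikube checks whether the cluster is paused, and on crio it does that via "sudo runc list -f json". On this node that command fails with "open /run/runc: no such file or directory", so the disable exits with MK_ADDON_DISABLE_PAUSED even though the crictl listing just above succeeds. A minimal way to inspect this by hand, sketched assuming "minikube ssh" access to the profile (the /run/crun path below is an assumption about the runtime state directory, not something this report confirms):

	# the exact paused-check command, taken from the log above
	minikube -p addons-831846 ssh -- sudo runc list -f json
	# see which runtime state directory actually exists on the node
	minikube -p addons-831846 ssh -- ls -d /run/runc /run/crun
	# the container listing that does succeed, also taken from the log above
	minikube -p addons-831846 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system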

TestAddons/parallel/CloudSpanner (5.26s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-wf4l7" [9794ab5b-3cf5-4e9f-bae5-ee15d93d46ac] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.007413581s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (248.115059ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:26:43.325736  361361 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:26:43.325998  361361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:43.326006  361361 out.go:374] Setting ErrFile to fd 2...
	I1124 02:26:43.326011  361361 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:43.326175  361361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:26:43.326432  361361 mustload.go:66] Loading cluster: addons-831846
	I1124 02:26:43.326729  361361 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:43.326745  361361 addons.go:622] checking whether the cluster is paused
	I1124 02:26:43.326832  361361 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:43.326846  361361 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:26:43.327211  361361 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:26:43.345811  361361 ssh_runner.go:195] Run: systemctl --version
	I1124 02:26:43.345876  361361 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:26:43.361908  361361 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:26:43.464844  361361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:26:43.464927  361361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:26:43.493116  361361 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:26:43.493138  361361 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:26:43.493143  361361 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:26:43.493147  361361 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:26:43.493152  361361 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:26:43.493156  361361 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:26:43.493176  361361 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:26:43.493184  361361 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:26:43.493189  361361 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:26:43.493197  361361 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:26:43.493204  361361 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:26:43.493209  361361 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:26:43.493215  361361 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:26:43.493220  361361 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:26:43.493226  361361 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:26:43.493250  361361 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:26:43.493261  361361 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:26:43.493266  361361 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:26:43.493271  361361 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:26:43.493274  361361 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:26:43.493278  361361 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:26:43.493282  361361 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:26:43.493288  361361 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:26:43.493294  361361 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:26:43.493302  361361 cri.go:89] found id: ""
	I1124 02:26:43.493358  361361 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:26:43.506834  361361 out.go:203] 
	W1124 02:26:43.507958  361361 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:26:43.507984  361361 out.go:285] * 
	* 
	W1124 02:26:43.512919  361361 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:26:43.514119  361361 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)

TestAddons/parallel/LocalPath (11.09s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-831846 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-831846 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-831846 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [9a5fa4ec-f23f-4212-9bd0-b6abd7b50ef5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [9a5fa4ec-f23f-4212-9bd0-b6abd7b50ef5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [9a5fa4ec-f23f-4212-9bd0-b6abd7b50ef5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.002943918s
addons_test.go:967: (dbg) Run:  kubectl --context addons-831846 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 ssh "cat /opt/local-path-provisioner/pvc-f6a25546-e26a-4b70-882a-bd7b6a6cd688_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-831846 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-831846 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (254.998999ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:26:48.098808  361828 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:26:48.099091  361828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:48.099101  361828 out.go:374] Setting ErrFile to fd 2...
	I1124 02:26:48.099105  361828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:48.099338  361828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:26:48.099649  361828 mustload.go:66] Loading cluster: addons-831846
	I1124 02:26:48.100038  361828 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:48.100062  361828 addons.go:622] checking whether the cluster is paused
	I1124 02:26:48.100186  361828 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:48.100204  361828 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:26:48.100678  361828 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:26:48.118223  361828 ssh_runner.go:195] Run: systemctl --version
	I1124 02:26:48.118286  361828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:26:48.135522  361828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:26:48.231103  361828 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:26:48.231174  361828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:26:48.259262  361828 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:26:48.259284  361828 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:26:48.259288  361828 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:26:48.259292  361828 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:26:48.259295  361828 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:26:48.259298  361828 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:26:48.259301  361828 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:26:48.259303  361828 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:26:48.259306  361828 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:26:48.259311  361828 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:26:48.259314  361828 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:26:48.259317  361828 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:26:48.259319  361828 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:26:48.259322  361828 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:26:48.259325  361828 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:26:48.259338  361828 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:26:48.259346  361828 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:26:48.259349  361828 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:26:48.259352  361828 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:26:48.259355  361828 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:26:48.259357  361828 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:26:48.259360  361828 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:26:48.259363  361828 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:26:48.259365  361828 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:26:48.259368  361828 cri.go:89] found id: ""
	I1124 02:26:48.259402  361828 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:26:48.274144  361828 out.go:203] 
	W1124 02:26:48.275172  361828 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:48Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:48Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:26:48.275197  361828 out.go:285] * 
	* 
	W1124 02:26:48.280090  361828 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:26:48.281460  361828 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (11.09s)

TestAddons/parallel/NvidiaDevicePlugin (5.24s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-gf6tr" [59b3a9aa-53dd-4231-97bd-2015d666639c] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003517012s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (239.935069ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:26:33.173775  359788 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:26:33.174048  359788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:33.174059  359788 out.go:374] Setting ErrFile to fd 2...
	I1124 02:26:33.174066  359788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:33.174245  359788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:26:33.174530  359788 mustload.go:66] Loading cluster: addons-831846
	I1124 02:26:33.174854  359788 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:33.174873  359788 addons.go:622] checking whether the cluster is paused
	I1124 02:26:33.174988  359788 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:33.175031  359788 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:26:33.175429  359788 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:26:33.193114  359788 ssh_runner.go:195] Run: systemctl --version
	I1124 02:26:33.193165  359788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:26:33.210070  359788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:26:33.306806  359788 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:26:33.306863  359788 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:26:33.334412  359788 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:26:33.334443  359788 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:26:33.334447  359788 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:26:33.334452  359788 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:26:33.334455  359788 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:26:33.334458  359788 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:26:33.334462  359788 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:26:33.334464  359788 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:26:33.334468  359788 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:26:33.334480  359788 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:26:33.334488  359788 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:26:33.334493  359788 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:26:33.334502  359788 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:26:33.334506  359788 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:26:33.334513  359788 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:26:33.334527  359788 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:26:33.334534  359788 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:26:33.334539  359788 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:26:33.334541  359788 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:26:33.334544  359788 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:26:33.334547  359788 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:26:33.334549  359788 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:26:33.334552  359788 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:26:33.334567  359788 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:26:33.334575  359788 cri.go:89] found id: ""
	I1124 02:26:33.334634  359788 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:26:33.347957  359788 out.go:203] 
	W1124 02:26:33.349071  359788 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:33Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:33Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:26:33.349086  359788 out.go:285] * 
	* 
	W1124 02:26:33.352920  359788 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:26:33.354131  359788 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.24s)

TestAddons/parallel/Yakd (6.25s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-2mtd5" [512e504d-5575-4891-a8c8-f02133d6f444] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00254719s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable yakd --alsologtostderr -v=1: exit status 11 (245.092385ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:26:39.418077  360366 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:26:39.418343  360366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:39.418352  360366 out.go:374] Setting ErrFile to fd 2...
	I1124 02:26:39.418357  360366 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:39.418565  360366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:26:39.418864  360366 mustload.go:66] Loading cluster: addons-831846
	I1124 02:26:39.419339  360366 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:39.419365  360366 addons.go:622] checking whether the cluster is paused
	I1124 02:26:39.419499  360366 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:39.419516  360366 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:26:39.420082  360366 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:26:39.440232  360366 ssh_runner.go:195] Run: systemctl --version
	I1124 02:26:39.440296  360366 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:26:39.459482  360366 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:26:39.556938  360366 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:26:39.557018  360366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:26:39.584231  360366 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:26:39.584265  360366 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:26:39.584270  360366 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:26:39.584273  360366 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:26:39.584276  360366 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:26:39.584281  360366 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:26:39.584284  360366 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:26:39.584287  360366 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:26:39.584290  360366 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:26:39.584302  360366 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:26:39.584311  360366 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:26:39.584326  360366 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:26:39.584331  360366 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:26:39.584338  360366 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:26:39.584343  360366 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:26:39.584362  360366 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:26:39.584372  360366 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:26:39.584379  360366 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:26:39.584383  360366 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:26:39.584388  360366 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:26:39.584396  360366 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:26:39.584400  360366 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:26:39.584408  360366 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:26:39.584413  360366 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:26:39.584421  360366 cri.go:89] found id: ""
	I1124 02:26:39.584469  360366 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:26:39.597547  360366 out.go:203] 
	W1124 02:26:39.598592  360366 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:26:39.598608  360366 out.go:285] * 
	* 
	W1124 02:26:39.602505  360366 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:26:39.603640  360366 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.25s)

TestAddons/parallel/AmdGpuDevicePlugin (6.25s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-6f6fp" [f718fb18-20f0-4f35-931c-3a308c345e99] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.002833906s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-831846 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-831846 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (241.160805ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 02:26:37.011449  359999 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:26:37.011716  359999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:37.011726  359999 out.go:374] Setting ErrFile to fd 2...
	I1124 02:26:37.011730  359999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:26:37.011988  359999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:26:37.012258  359999 mustload.go:66] Loading cluster: addons-831846
	I1124 02:26:37.012662  359999 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:37.012682  359999 addons.go:622] checking whether the cluster is paused
	I1124 02:26:37.012800  359999 config.go:182] Loaded profile config "addons-831846": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:26:37.012815  359999 host.go:66] Checking if "addons-831846" exists ...
	I1124 02:26:37.013210  359999 cli_runner.go:164] Run: docker container inspect addons-831846 --format={{.State.Status}}
	I1124 02:26:37.031554  359999 ssh_runner.go:195] Run: systemctl --version
	I1124 02:26:37.031604  359999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-831846
	I1124 02:26:37.048323  359999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/addons-831846/id_rsa Username:docker}
	I1124 02:26:37.145087  359999 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 02:26:37.145166  359999 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 02:26:37.174366  359999 cri.go:89] found id: "6a692d55674d0ae5522ecac358e2f81666db9cdb0147ea3e92508e31fe6a52ea"
	I1124 02:26:37.174395  359999 cri.go:89] found id: "80aafa11113da411b051b44041cd708bbc020911694598e12d3e085a5e407007"
	I1124 02:26:37.174400  359999 cri.go:89] found id: "ab306614f72c476856e770b6b9713b04277f6305520c77c09ef542d147a3fcd2"
	I1124 02:26:37.174405  359999 cri.go:89] found id: "87464323b09157b9b5a46ff729f9f0602870fcff0e1c51bdc2c86e106e35a579"
	I1124 02:26:37.174409  359999 cri.go:89] found id: "5b6dca54cd1ba0d9ddd2af3734858ec5cea797a59e717c0e5a8b22fb469037bb"
	I1124 02:26:37.174414  359999 cri.go:89] found id: "cb685f958daeeb46824185696f8e4a4cca438642c09bcca0dfcab78e0c721429"
	I1124 02:26:37.174419  359999 cri.go:89] found id: "d36d11b30d634a3e3b577c85cd3eb385f29792cee3ecfd29ee8a7bef0b0986d2"
	I1124 02:26:37.174423  359999 cri.go:89] found id: "488f8f5aecccf7751bb00aa7333d918dc56e12e6781148e2380becbe4cdd88ce"
	I1124 02:26:37.174428  359999 cri.go:89] found id: "3a2f505270ed3a6eaf1812c4b0135925e041b3dfa38eb897ecbe087b3b4025b3"
	I1124 02:26:37.174444  359999 cri.go:89] found id: "98efe6015f81da844e0221e4d833ec99ae8161a0e63544288c3bf235ff3c1138"
	I1124 02:26:37.174452  359999 cri.go:89] found id: "c290117d4f470d112a6b1b2506ec62f9239b92b45a3db7c03c10ba96fe17f7e5"
	I1124 02:26:37.174457  359999 cri.go:89] found id: "1a1040cba982875ca6db2f724e2e1c9887372e112652d4574f88f7fcd92f7053"
	I1124 02:26:37.174462  359999 cri.go:89] found id: "7315919ab4d42b949acb320b9348bb52b398ab1f883af6c63fa329b679cd7c89"
	I1124 02:26:37.174469  359999 cri.go:89] found id: "f1c6f9362048314bcd00cdd9717c191e062073b4467422e27153d31ad0a4b5e5"
	I1124 02:26:37.174475  359999 cri.go:89] found id: "ab2275b7d143a7313203e68ef891c132a8cb906438686dbacd457e6933b55145"
	I1124 02:26:37.174499  359999 cri.go:89] found id: "211b6a7c5a0f79cab95320f54543d91065518486b697f431212b0e32f6a781d8"
	I1124 02:26:37.174509  359999 cri.go:89] found id: "6be1f10bddb9dd3ea35c8982b8924753b6e76a79253a4f4cca91f799bb876c03"
	I1124 02:26:37.174515  359999 cri.go:89] found id: "109ca0df89d74a0f10b7a0524a460e1c6cd808036fbc2534abe5c2e1e5995314"
	I1124 02:26:37.174519  359999 cri.go:89] found id: "aac60890d17a3d88a2fe1827cd2a316b6b37dee75fd9b70a7e2cf466f2c0e8cf"
	I1124 02:26:37.174522  359999 cri.go:89] found id: "837f7d173b2d62ff4964c15cf6d4e9baec2b608df2bb55350f131b9e2e0dd7bd"
	I1124 02:26:37.174526  359999 cri.go:89] found id: "3949ef8e07cb29ed3112de485488c6567076f9ef1bb4d1842805813c9aed4b76"
	I1124 02:26:37.174530  359999 cri.go:89] found id: "9c9ff4c6ef4b7d9af5a43d377e2c85f13292386c88019fa60554f13cbb953a90"
	I1124 02:26:37.174533  359999 cri.go:89] found id: "a1f1f1012890921c335063f4c178ed2e0a7c69711cf3d711e9fef8ba043a3acf"
	I1124 02:26:37.174537  359999 cri.go:89] found id: "8704fbfea0bb053a132004d9ac3de84baa5bdc3ca268f24a5cce994de9d3b971"
	I1124 02:26:37.174541  359999 cri.go:89] found id: ""
	I1124 02:26:37.174599  359999 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 02:26:37.187846  359999 out.go:203] 
	W1124 02:26:37.188881  359999 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:26:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 02:26:37.188925  359999 out.go:285] * 
	* 
	W1124 02:26:37.193178  359999 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 02:26:37.194319  359999 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-831846 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.25s)

TestFunctional/parallel/ServiceCmdConnect (602.73s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-333040 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-333040 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-mjsp7" [f4a04e6d-bb36-4220-b0fd-298515934bf0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-333040 -n functional-333040
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-24 02:42:08.069340861 +0000 UTC m=+1087.121194213
functional_test.go:1645: (dbg) Run:  kubectl --context functional-333040 describe po hello-node-connect-7d85dfc575-mjsp7 -n default
functional_test.go:1645: (dbg) kubectl --context functional-333040 describe po hello-node-connect-7d85dfc575-mjsp7 -n default:
Name:             hello-node-connect-7d85dfc575-mjsp7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-333040/192.168.49.2
Start Time:       Mon, 24 Nov 2025 02:32:07 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hksh5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-hksh5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-mjsp7 to functional-333040
  Normal   Pulling    6m58s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m58s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-333040 logs hello-node-connect-7d85dfc575-mjsp7 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-333040 logs hello-node-connect-7d85dfc575-mjsp7 -n default: exit status 1 (62.8603ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-mjsp7" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-333040 logs hello-node-connect-7d85dfc575-mjsp7 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
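Note on the pull failure: the kubelet events above show crio's short-name policy rejecting the unqualified image reference ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list"): kicbase/echo-server matches more than one unqualified-search registry, so the pull is refused and the pod never leaves ImagePullBackOff. Two usual remedies, sketched under the assumption that the node uses the standard containers-registries.conf(5) layout; the docker.io qualification below is an assumption, not confirmed by this report:

	# fully qualify the image on the deployment
	kubectl --context functional-333040 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server

	# or pin the short name on the node, e.g. in /etc/containers/registries.conf.d/99-echo-server.conf
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"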
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-333040 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-mjsp7
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-333040/192.168.49.2
Start Time:       Mon, 24 Nov 2025 02:32:07 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hksh5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-hksh5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-mjsp7 to functional-333040
  Normal   Pulling    6m58s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m58s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     4m56s (x20 over 10m)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m45s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-333040 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-333040 logs -l app=hello-node-connect: exit status 1 (57.409583ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-mjsp7" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-333040 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-333040 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.243.187
IPs:                      10.99.243.187
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31046/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
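Note the empty Endpoints field above: with its only matching pod stuck in ImagePullBackOff, the Service has nothing ready behind NodePort 31046, so connection attempts time out instead of failing fast with an application error. A quick way to confirm (same kubectl context as the commands above):

	kubectl --context functional-333040 get endpoints hello-node-connect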
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-333040
helpers_test.go:243: (dbg) docker inspect functional-333040:

-- stdout --
	[
	    {
	        "Id": "776515e8e530bcaebc23b39fdbc1497bede8b5865266e2450a6dafad1e592a23",
	        "Created": "2025-11-24T02:30:26.559833048Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 372806,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T02:30:26.589091591Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/776515e8e530bcaebc23b39fdbc1497bede8b5865266e2450a6dafad1e592a23/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/776515e8e530bcaebc23b39fdbc1497bede8b5865266e2450a6dafad1e592a23/hostname",
	        "HostsPath": "/var/lib/docker/containers/776515e8e530bcaebc23b39fdbc1497bede8b5865266e2450a6dafad1e592a23/hosts",
	        "LogPath": "/var/lib/docker/containers/776515e8e530bcaebc23b39fdbc1497bede8b5865266e2450a6dafad1e592a23/776515e8e530bcaebc23b39fdbc1497bede8b5865266e2450a6dafad1e592a23-json.log",
	        "Name": "/functional-333040",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-333040:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-333040",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "776515e8e530bcaebc23b39fdbc1497bede8b5865266e2450a6dafad1e592a23",
	                "LowerDir": "/var/lib/docker/overlay2/7ef89622338a02509d944f01cfdd6126c35e75f4772ccb9cca6fa5e66b8475b2-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7ef89622338a02509d944f01cfdd6126c35e75f4772ccb9cca6fa5e66b8475b2/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7ef89622338a02509d944f01cfdd6126c35e75f4772ccb9cca6fa5e66b8475b2/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7ef89622338a02509d944f01cfdd6126c35e75f4772ccb9cca6fa5e66b8475b2/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-333040",
	                "Source": "/var/lib/docker/volumes/functional-333040/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-333040",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-333040",
	                "name.minikube.sigs.k8s.io": "functional-333040",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2f5d486bd39b8f243f3ed933a8bfdcaab1f0423accd65f787aa35269de741261",
	            "SandboxKey": "/var/run/docker/netns/2f5d486bd39b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-333040": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "af8ea79a521d6f2c2d6972bbd191394cf9f3bbea7995751a470c65510167ffed",
	                    "EndpointID": "8f4a534d6074162afbc62478a6d9edee7ce914268da4eb97ddbd457628e8586c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "5e:c8:cd:48:fd:25",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-333040",
	                        "776515e8e530"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-333040 -n functional-333040
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-333040 logs -n 25: (1.177152229s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-333040 ssh findmnt -T /mount-9p | grep 9p                                                              │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ ssh            │ functional-333040 ssh findmnt -T /mount-9p | grep 9p                                                              │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh            │ functional-333040 ssh -- ls -la /mount-9p                                                                         │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh            │ functional-333040 ssh sudo umount -f /mount-9p                                                                    │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ mount          │ -p functional-333040 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516723937/001:/mount2 --alsologtostderr -v=1 │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ mount          │ -p functional-333040 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516723937/001:/mount3 --alsologtostderr -v=1 │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ mount          │ -p functional-333040 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516723937/001:/mount1 --alsologtostderr -v=1 │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ ssh            │ functional-333040 ssh findmnt -T /mount1                                                                          │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ ssh            │ functional-333040 ssh findmnt -T /mount1                                                                          │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh            │ functional-333040 ssh findmnt -T /mount2                                                                          │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh            │ functional-333040 ssh findmnt -T /mount3                                                                          │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ mount          │ -p functional-333040 --kill=true                                                                                  │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ start          │ -p functional-333040 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio         │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ start          │ -p functional-333040 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                   │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-333040 --alsologtostderr -v=1                                                    │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ update-context │ functional-333040 update-context --alsologtostderr -v=2                                                           │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ update-context │ functional-333040 update-context --alsologtostderr -v=2                                                           │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ update-context │ functional-333040 update-context --alsologtostderr -v=2                                                           │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ image          │ functional-333040 image ls --format short --alsologtostderr                                                       │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ image          │ functional-333040 image ls --format yaml --alsologtostderr                                                        │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ image          │ functional-333040 image ls --format json --alsologtostderr                                                        │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ image          │ functional-333040 image ls --format table --alsologtostderr                                                       │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ ssh            │ functional-333040 ssh pgrep buildkitd                                                                             │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │                     │
	│ image          │ functional-333040 image build -t localhost/my-image:functional-333040 testdata/build --alsologtostderr            │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	│ image          │ functional-333040 image ls                                                                                        │ functional-333040 │ jenkins │ v1.37.0 │ 24 Nov 25 02:32 UTC │ 24 Nov 25 02:32 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:32:25
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:32:25.992739  388283 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:32:25.992837  388283 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:25.992845  388283 out.go:374] Setting ErrFile to fd 2...
	I1124 02:32:25.992849  388283 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:25.993035  388283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:32:25.993411  388283 out.go:368] Setting JSON to false
	I1124 02:32:25.994344  388283 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4493,"bootTime":1763947053,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:32:25.994395  388283 start.go:143] virtualization: kvm guest
	I1124 02:32:25.995769  388283 out.go:179] * [functional-333040] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:32:25.997474  388283 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:32:25.997508  388283 notify.go:221] Checking for updates...
	I1124 02:32:25.999675  388283 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:32:26.000802  388283 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 02:32:26.005307  388283 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 02:32:26.006326  388283 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:32:26.007316  388283 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:32:26.008766  388283 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:32:26.009327  388283 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:32:26.037754  388283 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:32:26.037897  388283 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:26.096507  388283 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:26.08649988 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:26.096601  388283 docker.go:319] overlay module found
	I1124 02:32:26.098130  388283 out.go:179] * Using the docker driver based on existing profile
	I1124 02:32:26.100802  388283 start.go:309] selected driver: docker
	I1124 02:32:26.100821  388283 start.go:927] validating driver "docker" against &{Name:functional-333040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-333040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:26.100951  388283 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:32:26.101054  388283 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:26.159062  388283 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:26.149235542 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:26.159669  388283 cni.go:84] Creating CNI manager for ""
	I1124 02:32:26.159730  388283 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 02:32:26.159775  388283 start.go:353] cluster config:
	{Name:functional-333040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-333040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:26.161825  388283 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 24 02:32:31 functional-333040 crio[3570]: time="2025-11-24T02:32:31.883420406Z" level=info msg="Starting container: 0690e3f95852ea979407b3eb2fff416b976df80fe1cc894096fc6b1f5dd8ec6c" id=09fe4724-f365-4612-8e86-d6c32ca7dc7a name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 02:32:31 functional-333040 crio[3570]: time="2025-11-24T02:32:31.885495298Z" level=info msg="Started container" PID=8122 containerID=0690e3f95852ea979407b3eb2fff416b976df80fe1cc894096fc6b1f5dd8ec6c description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tpz55/kubernetes-dashboard id=09fe4724-f365-4612-8e86-d6c32ca7dc7a name=/runtime.v1.RuntimeService/StartContainer sandboxID=e7702fb4f33c9852a2f5a5886b6dea4166f3b8f68c5b7c5568be77649c739d56
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.722226884Z" level=info msg="Stopping pod sandbox: ed6189468918654298444fd9dd68c9c93f410621a893e119d519b2a586755ba2" id=c71b6221-8bfd-428e-aa74-a48b0e8189dc name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.722290656Z" level=info msg="Stopped pod sandbox (already stopped): ed6189468918654298444fd9dd68c9c93f410621a893e119d519b2a586755ba2" id=c71b6221-8bfd-428e-aa74-a48b0e8189dc name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.722740759Z" level=info msg="Removing pod sandbox: ed6189468918654298444fd9dd68c9c93f410621a893e119d519b2a586755ba2" id=461ed583-52d6-4645-a817-dd0980114c15 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.738666145Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.738736489Z" level=info msg="Removed pod sandbox: ed6189468918654298444fd9dd68c9c93f410621a893e119d519b2a586755ba2" id=461ed583-52d6-4645-a817-dd0980114c15 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.73929402Z" level=info msg="Stopping pod sandbox: 6dfc1f6026da111d23e0551844a6797dcb9383d8a61285328d809e8192d8950c" id=0e2bd5da-9e0a-4b7e-b5d0-21dbd02ec9a4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.739347532Z" level=info msg="Stopped pod sandbox (already stopped): 6dfc1f6026da111d23e0551844a6797dcb9383d8a61285328d809e8192d8950c" id=0e2bd5da-9e0a-4b7e-b5d0-21dbd02ec9a4 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.73969902Z" level=info msg="Removing pod sandbox: 6dfc1f6026da111d23e0551844a6797dcb9383d8a61285328d809e8192d8950c" id=843bb63e-db37-41e5-8a16-b7fea7cf5ea9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.761775443Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.761835985Z" level=info msg="Removed pod sandbox: 6dfc1f6026da111d23e0551844a6797dcb9383d8a61285328d809e8192d8950c" id=843bb63e-db37-41e5-8a16-b7fea7cf5ea9 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.762214266Z" level=info msg="Stopping pod sandbox: b164b34b469901de15969ac7019203e033d8b9dbfcc0df1d8bbf8089289f8277" id=d653b5c9-c536-4c4b-aa7f-2c68b3d0e848 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.762273233Z" level=info msg="Stopped pod sandbox (already stopped): b164b34b469901de15969ac7019203e033d8b9dbfcc0df1d8bbf8089289f8277" id=d653b5c9-c536-4c4b-aa7f-2c68b3d0e848 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.762601689Z" level=info msg="Removing pod sandbox: b164b34b469901de15969ac7019203e033d8b9dbfcc0df1d8bbf8089289f8277" id=bde2a1d0-796c-4107-b98f-5883c87c6d9f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.799254781Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 02:32:37 functional-333040 crio[3570]: time="2025-11-24T02:32:37.799361012Z" level=info msg="Removed pod sandbox: b164b34b469901de15969ac7019203e033d8b9dbfcc0df1d8bbf8089289f8277" id=bde2a1d0-796c-4107-b98f-5883c87c6d9f name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 02:32:49 functional-333040 crio[3570]: time="2025-11-24T02:32:49.734080718Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ecfe10f6-dcc1-4aaa-beab-1adaecfdaba1 name=/runtime.v1.ImageService/PullImage
	Nov 24 02:32:52 functional-333040 crio[3570]: time="2025-11-24T02:32:52.734494315Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=49685e9d-b16d-48b1-9d3b-0573c9e13af6 name=/runtime.v1.ImageService/PullImage
	Nov 24 02:33:39 functional-333040 crio[3570]: time="2025-11-24T02:33:39.734064586Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=30e0b3bd-5e20-4dbd-a592-759b8176746b name=/runtime.v1.ImageService/PullImage
	Nov 24 02:33:44 functional-333040 crio[3570]: time="2025-11-24T02:33:44.733580537Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4c2344b4-a026-4385-89a8-4f95968790bd name=/runtime.v1.ImageService/PullImage
	Nov 24 02:35:10 functional-333040 crio[3570]: time="2025-11-24T02:35:10.734005508Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=7ba6b4e3-d632-4e11-ae99-e7b9b3e26db3 name=/runtime.v1.ImageService/PullImage
	Nov 24 02:35:18 functional-333040 crio[3570]: time="2025-11-24T02:35:18.734138395Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=acf4b687-32c2-4769-8721-c3482418a389 name=/runtime.v1.ImageService/PullImage
	Nov 24 02:37:52 functional-333040 crio[3570]: time="2025-11-24T02:37:52.734120505Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6400e31e-230d-4701-9c7e-757a830d6e6c name=/runtime.v1.ImageService/PullImage
	Nov 24 02:38:05 functional-333040 crio[3570]: time="2025-11-24T02:38:05.733987433Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f4ab050b-df4f-4237-944c-8cc7ec9b7d6e name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	0690e3f95852e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   e7702fb4f33c9       kubernetes-dashboard-855c9754f9-tpz55        kubernetes-dashboard
	5d838bb974e59       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   a108430eef4e7       dashboard-metrics-scraper-77bf4d6c4c-jc9hr   kubernetes-dashboard
	f023547f28649       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   6de9f53c4ea34       busybox-mount                                default
	c9a42ed21151c       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   e1e679d27c7ed       sp-pod                                       default
	c78b58d3c721a       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   07b067f81dee9       nginx-svc                                    default
	eb5a1aa4ec912       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   2e8bd7fc1319b       mysql-5bb876957f-xgm98                       default
	ae5008a9158f1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   feac7e2af5cb6       kube-apiserver-functional-333040             kube-system
	2a683c5822d53       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   8446f6be08429       kube-controller-manager-functional-333040    kube-system
	dce144aaf9472       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   df8be81f72287       kube-scheduler-functional-333040             kube-system
	16cadc848fc5d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   c78d46d4ed04b       etcd-functional-333040                       kube-system
	e7c06e71d5cc2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   8446f6be08429       kube-controller-manager-functional-333040    kube-system
	ba76d1379733f       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   ea143670cadb3       kube-proxy-nmftl                             kube-system
	0a0a27764ffea       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   2ad3617423f8b       storage-provisioner                          kube-system
	865c73d80cd13       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   382804916bd64       coredns-66bc5c9577-kknvb                     kube-system
	9cac1ac33a914       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   94acdf4a3b3c8       kindnet-5p8xq                                kube-system
	1102141be4f6e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   382804916bd64       coredns-66bc5c9577-kknvb                     kube-system
	84c9df79230c5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   2ad3617423f8b       storage-provisioner                          kube-system
	c31b4909ec259       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   94acdf4a3b3c8       kindnet-5p8xq                                kube-system
	abc7b95d53bd4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   ea143670cadb3       kube-proxy-nmftl                             kube-system
	e5bf5a7e20a2f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   c78d46d4ed04b       etcd-functional-333040                       kube-system
	c7db1a15ce4f1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   df8be81f72287       kube-scheduler-functional-333040             kube-system
	
	
	==> coredns [1102141be4f6ed79101e1cdd27b38c0e8b20c6aef535eec645c2fa1bed29e5f5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58777 - 36309 "HINFO IN 90647842334957267.3316369986599525363. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.464379774s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [865c73d80cd13b52f01b9cb9d94dfb8e90e13a769368cad88ce818bc66004c80] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59927 - 15490 "HINFO IN 3174280128817255681.1069927294273941076. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069753312s
	
	
	==> describe nodes <==
	Name:               functional-333040
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-333040
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=functional-333040
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T02_30_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 02:30:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-333040
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 02:42:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 02:41:52 +0000   Mon, 24 Nov 2025 02:30:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 02:41:52 +0000   Mon, 24 Nov 2025 02:30:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 02:41:52 +0000   Mon, 24 Nov 2025 02:30:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 02:41:52 +0000   Mon, 24 Nov 2025 02:30:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-333040
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                101fcc7b-e29a-4272-9229-269c796136b3
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-8rm82                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  default                     hello-node-connect-7d85dfc575-mjsp7           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-xgm98                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 coredns-66bc5c9577-kknvb                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-333040                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-5p8xq                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-333040              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-333040     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-nmftl                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-333040              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-jc9hr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tpz55         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-333040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-333040 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-333040 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-333040 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-333040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-333040 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-333040 event: Registered Node functional-333040 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-333040 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-333040 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-333040 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-333040 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-333040 event: Registered Node functional-333040 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 a4 5e 1f c0 90 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 ca fc 5f 92 50 08 06
	[Nov24 02:26] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.010203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023866] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +2.047771] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[Nov24 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +8.191144] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[ +16.382391] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[ +32.252621] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	
	
	==> etcd [16cadc848fc5d184feda5dc3919e0f59b2a8883ffc45939aa2d8b0d1e943188d] <==
	{"level":"warn","ts":"2025-11-24T02:31:39.155562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.162490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.168329Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.175357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.181275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.188537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.194264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.200556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.206877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.214050Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33536","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.224262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.231076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.237657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.243362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.249440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.255998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.263400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.269850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.292208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.298732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.304388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:31:39.344815Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33766","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T02:41:38.882680Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1175}
	{"level":"info","ts":"2025-11-24T02:41:38.901119Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1175,"took":"17.960795ms","hash":3927727404,"current-db-size-bytes":3440640,"current-db-size":"3.4 MB","current-db-size-in-use-bytes":1556480,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-24T02:41:38.901154Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3927727404,"revision":1175,"compact-revision":-1}
	
	
	==> etcd [e5bf5a7e20a2f560c4c86b4b64ce44d823a89da55c510180b809ecb4e0e81fca] <==
	{"level":"warn","ts":"2025-11-24T02:30:35.424873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:35.431585Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:35.437388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:35.443155Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:35.463631Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:35.469538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T02:30:35.517570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58314","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T02:31:18.631968Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T02:31:18.632052Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-333040","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T02:31:18.632146Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T02:31:25.633295Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T02:31:25.637329Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:25.637379Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-11-24T02:31:25.637402Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-11-24T02:31:25.637430Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-24T02:31:25.637465Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T02:31:25.637475Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T02:31:25.637402Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T02:31:25.637490Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T02:31:25.637496Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:25.637443Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T02:31:25.639331Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T02:31:25.639381Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T02:31:25.639404Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T02:31:25.639425Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-333040","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 02:42:09 up  1:24,  0 user,  load average: 0.04, 0.23, 0.87
	Linux functional-333040 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9cac1ac33a914f68dd024237cd57527188ad6140eebe72a43cd3c6cbebd4cba6] <==
	I1124 02:40:08.852013       1 main.go:301] handling current node
	I1124 02:40:18.860101       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:40:18.860135       1 main.go:301] handling current node
	I1124 02:40:28.854309       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:40:28.854367       1 main.go:301] handling current node
	I1124 02:40:38.852575       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:40:38.852604       1 main.go:301] handling current node
	I1124 02:40:48.852299       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:40:48.852328       1 main.go:301] handling current node
	I1124 02:40:58.860558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:40:58.860585       1 main.go:301] handling current node
	I1124 02:41:08.852133       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:41:08.852161       1 main.go:301] handling current node
	I1124 02:41:18.860213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:41:18.860245       1 main.go:301] handling current node
	I1124 02:41:28.851642       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:41:28.851676       1 main.go:301] handling current node
	I1124 02:41:38.852097       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:41:38.852126       1 main.go:301] handling current node
	I1124 02:41:48.854752       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:41:48.854782       1 main.go:301] handling current node
	I1124 02:41:58.853843       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:41:58.853876       1 main.go:301] handling current node
	I1124 02:42:08.855213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:42:08.855250       1 main.go:301] handling current node
	
	
	==> kindnet [c31b4909ec259296543c9772b202836805b4df03916538c10d97611ffd112399] <==
	I1124 02:30:44.285016       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 02:30:44.285270       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 02:30:44.285405       1 main.go:148] setting mtu 1500 for CNI 
	I1124 02:30:44.285420       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 02:30:44.285441       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T02:30:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 02:30:44.484278       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 02:30:44.484361       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 02:30:44.484761       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 02:30:44.484803       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 02:30:44.685892       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 02:30:44.685918       1 metrics.go:72] Registering metrics
	I1124 02:30:44.685959       1 controller.go:711] "Syncing nftables rules"
	I1124 02:30:54.485022       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:30:54.485087       1 main.go:301] handling current node
	I1124 02:31:04.489233       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:04.489272       1 main.go:301] handling current node
	I1124 02:31:14.487510       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 02:31:14.487543       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ae5008a9158f13de12791dc2f1e5efe8c30e2ed785b4620c90e89cd1cf4f097f] <==
	I1124 02:31:40.699527       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 02:31:40.737519       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	W1124 02:31:40.903541       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1124 02:31:40.904563       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 02:31:40.908375       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 02:31:41.571392       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 02:31:41.654629       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 02:31:41.694749       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 02:31:41.700698       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 02:31:43.521121       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 02:31:53.721293       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.96.36.165"}
	I1124 02:31:58.579706       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.110.147.146"}
	I1124 02:32:02.000368       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.97.2.102"}
	I1124 02:32:07.744601       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.243.187"}
	E1124 02:32:11.709811       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60508: use of closed network connection
	E1124 02:32:13.124044       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60538: use of closed network connection
	I1124 02:32:13.291966       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.102.141.209"}
	E1124 02:32:14.212547       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60578: use of closed network connection
	E1124 02:32:16.478269       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:60600: use of closed network connection
	E1124 02:32:17.049631       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38862: use of closed network connection
	E1124 02:32:25.099882       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:38916: use of closed network connection
	I1124 02:32:26.971854       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 02:32:27.066521       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.5.125"}
	I1124 02:32:27.083724       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.74.2"}
	I1124 02:41:39.727210       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [2a683c5822d53091ed92e15655a69eeab53ea1d12df4586d0ab706d8e9d292b7] <==
	I1124 02:31:43.117703       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 02:31:43.117738       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 02:31:43.117772       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 02:31:43.117915       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 02:31:43.117803       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 02:31:43.117966       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 02:31:43.118044       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-333040"
	I1124 02:31:43.118127       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 02:31:43.117794       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 02:31:43.118395       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 02:31:43.119054       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 02:31:43.119076       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 02:31:43.121467       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 02:31:43.121487       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 02:31:43.124275       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 02:31:43.135503       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 02:31:43.136681       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 02:31:43.138930       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 02:31:43.143145       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 02:32:27.015026       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:27.020710       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:27.021907       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:27.023631       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:27.026560       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 02:32:27.030298       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [e7c06e71d5cc20fa255c4ce33493ba0d2ef8de4658371728d80572b2921b07a8] <==
	I1124 02:31:19.745005       1 serving.go:386] Generated self-signed cert in-memory
	I1124 02:31:20.357816       1 controllermanager.go:191] "Starting" version="v1.34.1"
	I1124 02:31:20.357835       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:31:20.359276       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1124 02:31:20.359569       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1124 02:31:20.360251       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I1124 02:31:20.360570       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 02:31:20.367565       1 controllermanager.go:781] "Started controller" controller="serviceaccount-token-controller"
	I1124 02:31:20.367621       1 shared_informer.go:349] "Waiting for caches to sync" controller="tokens"
	I1124 02:31:28.248668       1 controllermanager.go:781] "Started controller" controller="resourceclaim-controller"
	I1124 02:31:28.248695       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="device-taint-eviction-controller" requiredFeatureGates=["DynamicResourceAllocation","DRADeviceTaints"]
	I1124 02:31:28.248779       1 controller.go:397] "Starting resource claim controller" logger="resourceclaim-controller"
	I1124 02:31:28.248870       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource_claim"
	F1124 02:31:28.249124       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-proxy [abc7b95d53bd475b1bf20524af05fc6672f27379b40bb4968f5dc1e0e2578912] <==
	I1124 02:30:44.124564       1 server_linux.go:53] "Using iptables proxy"
	I1124 02:30:44.199488       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 02:30:44.299899       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:30:44.299943       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:30:44.300046       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:30:44.317139       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:30:44.317189       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:30:44.321925       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:30:44.322230       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:30:44.322249       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:30:44.323480       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:30:44.323509       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:30:44.323529       1 config.go:309] "Starting node config controller"
	I1124 02:30:44.323544       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:30:44.323681       1 config.go:200] "Starting service config controller"
	I1124 02:30:44.323744       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:30:44.323703       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:30:44.323784       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:30:44.423824       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 02:30:44.423855       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 02:30:44.423872       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:30:44.423880       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [ba76d1379733f31d6ec231b06076bdb2155ae7e6c2eeaec2e3340f5146070026] <==
	I1124 02:31:19.513065       1 server_linux.go:53] "Using iptables proxy"
	I1124 02:31:19.602430       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 02:31:19.702510       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 02:31:19.702537       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 02:31:19.702598       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 02:31:19.720550       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 02:31:19.720589       1 server_linux.go:132] "Using iptables Proxier"
	I1124 02:31:19.725553       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 02:31:19.725836       1 server.go:527] "Version info" version="v1.34.1"
	I1124 02:31:19.725868       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 02:31:19.727186       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 02:31:19.727207       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 02:31:19.727240       1 config.go:106] "Starting endpoint slice config controller"
	I1124 02:31:19.727246       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 02:31:19.727241       1 config.go:200] "Starting service config controller"
	I1124 02:31:19.727257       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 02:31:19.727260       1 config.go:309] "Starting node config controller"
	I1124 02:31:19.727270       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 02:31:19.727278       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 02:31:19.827808       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 02:31:19.827907       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 02:31:19.827908       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [c7db1a15ce4f11a84b0f31ef475bf5a22e3d53b4b96d6409708df46736767c3a] <==
	E1124 02:30:35.925026       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:30:35.925046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:30:35.925071       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:30:35.925174       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:30:35.925182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 02:30:35.925185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:30:35.925200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:30:36.742851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 02:30:36.795808       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:30:36.802729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 02:30:36.899855       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:30:36.918831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:30:36.956079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:30:36.981315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:30:37.007059       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:30:37.059358       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:30:37.063012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:30:37.122084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1124 02:30:40.222544       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 02:31:25.741237       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 02:31:25.741253       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 02:31:25.741480       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 02:31:25.741581       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 02:31:25.741589       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 02:31:25.741605       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dce144aaf94727e500dbae4dfa424f72246c2ed0c3fcfc5f0691ae156a87f066] <==
	E1124 02:31:32.410660       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:31:35.131288       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:31:35.274189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 02:31:35.574730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 02:31:35.726506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 02:31:35.739844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 02:31:36.029546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 02:31:36.278195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 02:31:36.337017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 02:31:36.358409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 02:31:36.404206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 02:31:36.413546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:31:36.810544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 02:31:36.840083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 02:31:36.988107       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 02:31:37.036468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 02:31:37.111446       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 02:31:37.219009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 02:31:37.273643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 02:31:37.440599       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 02:31:38.045047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 02:31:39.719933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1124 02:31:44.968434       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 02:31:47.768637       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 02:31:48.767896       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 24 02:39:30 functional-333040 kubelet[4298]: E1124 02:39:30.733931    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	Nov 24 02:39:33 functional-333040 kubelet[4298]: E1124 02:39:33.733643    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-mjsp7" podUID="f4a04e6d-bb36-4220-b0fd-298515934bf0"
	Nov 24 02:39:43 functional-333040 kubelet[4298]: E1124 02:39:43.734120    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	Nov 24 02:39:45 functional-333040 kubelet[4298]: E1124 02:39:45.733186    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-mjsp7" podUID="f4a04e6d-bb36-4220-b0fd-298515934bf0"
	Nov 24 02:39:54 functional-333040 kubelet[4298]: E1124 02:39:54.733314    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	Nov 24 02:40:00 functional-333040 kubelet[4298]: E1124 02:40:00.733418    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-mjsp7" podUID="f4a04e6d-bb36-4220-b0fd-298515934bf0"
	Nov 24 02:40:08 functional-333040 kubelet[4298]: E1124 02:40:08.733395    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	Nov 24 02:40:15 functional-333040 kubelet[4298]: E1124 02:40:15.733260    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-mjsp7" podUID="f4a04e6d-bb36-4220-b0fd-298515934bf0"
	Nov 24 02:40:23 functional-333040 kubelet[4298]: E1124 02:40:23.733728    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	Nov 24 02:40:26 functional-333040 kubelet[4298]: E1124 02:40:26.733655    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-mjsp7" podUID="f4a04e6d-bb36-4220-b0fd-298515934bf0"
	Nov 24 02:40:37 functional-333040 kubelet[4298]: E1124 02:40:37.734019    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-mjsp7" podUID="f4a04e6d-bb36-4220-b0fd-298515934bf0"
	Nov 24 02:40:37 functional-333040 kubelet[4298]: E1124 02:40:37.734174    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	Nov 24 02:40:48 functional-333040 kubelet[4298]: E1124 02:40:48.733999    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-mjsp7" podUID="f4a04e6d-bb36-4220-b0fd-298515934bf0"
	Nov 24 02:40:51 functional-333040 kubelet[4298]: E1124 02:40:51.733223    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	Nov 24 02:41:02 functional-333040 kubelet[4298]: E1124 02:41:02.733533    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-mjsp7" podUID="f4a04e6d-bb36-4220-b0fd-298515934bf0"
	Nov 24 02:41:05 functional-333040 kubelet[4298]: E1124 02:41:05.733747    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	Nov 24 02:41:14 functional-333040 kubelet[4298]: E1124 02:41:14.733625    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-mjsp7" podUID="f4a04e6d-bb36-4220-b0fd-298515934bf0"
	Nov 24 02:41:17 functional-333040 kubelet[4298]: E1124 02:41:17.735487    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	Nov 24 02:41:29 functional-333040 kubelet[4298]: E1124 02:41:29.733578    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-mjsp7" podUID="f4a04e6d-bb36-4220-b0fd-298515934bf0"
	Nov 24 02:41:30 functional-333040 kubelet[4298]: E1124 02:41:30.733691    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	Nov 24 02:41:41 functional-333040 kubelet[4298]: E1124 02:41:41.734242    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	Nov 24 02:41:42 functional-333040 kubelet[4298]: E1124 02:41:42.733659    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-mjsp7" podUID="f4a04e6d-bb36-4220-b0fd-298515934bf0"
	Nov 24 02:41:54 functional-333040 kubelet[4298]: E1124 02:41:54.733663    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	Nov 24 02:41:56 functional-333040 kubelet[4298]: E1124 02:41:56.733362    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-mjsp7" podUID="f4a04e6d-bb36-4220-b0fd-298515934bf0"
	Nov 24 02:42:08 functional-333040 kubelet[4298]: E1124 02:42:08.734321    4298 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-8rm82" podUID="89bb2271-199b-49b9-a620-48313278a456"
	
	
	==> kubernetes-dashboard [0690e3f95852ea979407b3eb2fff416b976df80fe1cc894096fc6b1f5dd8ec6c] <==
	2025/11/24 02:32:31 Using namespace: kubernetes-dashboard
	2025/11/24 02:32:31 Using in-cluster config to connect to apiserver
	2025/11/24 02:32:31 Using secret token for csrf signing
	2025/11/24 02:32:31 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 02:32:31 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 02:32:31 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 02:32:31 Generating JWE encryption key
	2025/11/24 02:32:31 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 02:32:31 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 02:32:32 Initializing JWE encryption key from synchronized object
	2025/11/24 02:32:32 Creating in-cluster Sidecar client
	2025/11/24 02:32:32 Successful request to sidecar
	2025/11/24 02:32:32 Serving insecurely on HTTP port: 9090
	2025/11/24 02:32:31 Starting overwatch
	
	
	==> storage-provisioner [0a0a27764ffeaebc92aec000b21f1283318bc78ede7ba7ebef828cd6075f82d7] <==
	W1124 02:41:45.492560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:47.494815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:47.498257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:49.501125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:49.505512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:51.507938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:51.512564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:53.515289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:53.518818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:55.521165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:55.525672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:57.528199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:57.532031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:59.534452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:41:59.538125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:42:01.540906       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:42:01.544464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:42:03.547105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:42:03.551364       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:42:05.553744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:42:05.557132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:42:07.559525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:42:07.563222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:42:09.566941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:42:09.570451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [84c9df79230c5d2857a8571cb76380d13ed4852a8c46a952b58479f118c1db7a] <==
	W1124 02:30:55.315772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:30:55.318944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 02:30:55.413538       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-333040_544a1efa-bf1b-422b-bc46-3bdbea717e59!
	W1124 02:30:57.322641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:30:57.326737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:30:59.330164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:30:59.333783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:01.336697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:01.340686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:03.344278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:03.349099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:05.352087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:05.355707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:07.358558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:07.362211       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:09.364788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:09.369839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:11.372251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:11.375769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:13.377815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:13.381296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:15.383944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:15.388029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:17.391028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 02:31:17.394180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
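
The storage-provisioner warnings above all come from watching the deprecated core/v1 Endpoints API. A minimal client-go sketch of the replacement the warning recommends, assuming an already-configured *kubernetes.Clientset (the package, function name, and selector are illustrative, not minikube code):

	package triage

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// listSlices lists discovery.k8s.io/v1 EndpointSlices for a Service,
	// which is the API the repeated deprecation warnings point to.
	func listSlices(ctx context.Context, cs *kubernetes.Clientset, ns, svc string) error {
		slices, err := cs.DiscoveryV1().EndpointSlices(ns).List(ctx, metav1.ListOptions{
			// EndpointSlices are tied to their Service by this well-known label.
			LabelSelector: "kubernetes.io/service-name=" + svc,
		})
		if err != nil {
			return err
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
		return nil
	}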
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-333040 -n functional-333040
helpers_test.go:269: (dbg) Run:  kubectl --context functional-333040 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-8rm82 hello-node-connect-7d85dfc575-mjsp7
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-333040 describe pod busybox-mount hello-node-75c85bcc94-8rm82 hello-node-connect-7d85dfc575-mjsp7
helpers_test.go:290: (dbg) kubectl --context functional-333040 describe pod busybox-mount hello-node-75c85bcc94-8rm82 hello-node-connect-7d85dfc575-mjsp7:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-333040/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:18 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://f023547f28649ac27d8b2a4e4c7904a6ace00536beb37e7a12b4ec2ca08c0e69
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 02:32:19 +0000
	      Finished:     Mon, 24 Nov 2025 02:32:19 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rxf4f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-rxf4f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m52s  default-scheduler  Successfully assigned default/busybox-mount to functional-333040
	  Normal  Pulling    9m52s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m51s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 625ms (788ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m51s  kubelet            Created container: mount-munger
	  Normal  Started    9m51s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-8rm82
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-333040/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:13 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2jqr6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2jqr6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m57s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8rm82 to functional-333040
	  Normal   Pulling    6m52s (x5 over 9m57s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m52s (x5 over 9m57s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m52s (x5 over 9m57s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m50s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m50s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-mjsp7
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-333040/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 02:32:07 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hksh5 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hksh5:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-mjsp7 to functional-333040
	  Normal   Pulling    7m (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m (x5 over 10m)      kubelet            Error: ErrImagePull
	  Warning  Failed     4m58s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m47s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.73s)
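
Every ImagePullBackOff in the logs above has the same root cause: CRI-O's short-name policy is set to enforcing, and the bare reference kicbase/echo-server matches more than one unqualified-search registry, so the pull is rejected as ambiguous. Two ways out: use a fully qualified reference (docker.io/kicbase/echo-server) in the manifest, or register a short-name alias. A hedged sketch of the alias route, assuming the standard containers-registries.conf(5) drop-in directory (the drop-in file name is illustrative, not something minikube ships):

	package main

	import "os"

	// A short-name alias lets "kicbase/echo-server" resolve unambiguously
	// to docker.io even under enforcing mode.
	const alias = `[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"
	`

	func main() {
		err := os.WriteFile("/etc/containers/registries.conf.d/99-echo-server.conf", []byte(alias), 0o644)
		if err != nil {
			panic(err)
		}
	}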

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image load --daemon kicbase/echo-server:functional-333040 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-333040" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image load --daemon kicbase/echo-server:functional-333040 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-333040 image ls: (2.170359284s)
functional_test.go:461: expected "kicbase/echo-server:functional-333040" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.15s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-333040
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image load --daemon kicbase/echo-server:functional-333040 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-333040 image load --daemon kicbase/echo-server:functional-333040 --alsologtostderr: (1.485097217s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-333040" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.10s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image save kicbase/echo-server:functional-333040 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1124 02:32:06.991549  384628 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:32:06.991810  384628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:06.991820  384628 out.go:374] Setting ErrFile to fd 2...
	I1124 02:32:06.991824  384628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:06.992059  384628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:32:06.992627  384628 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:32:06.992737  384628 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:32:06.993252  384628 cli_runner.go:164] Run: docker container inspect functional-333040 --format={{.State.Status}}
	I1124 02:32:07.012666  384628 ssh_runner.go:195] Run: systemctl --version
	I1124 02:32:07.012713  384628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333040
	I1124 02:32:07.030967  384628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/functional-333040/id_rsa Username:docker}
	I1124 02:32:07.129108  384628 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1124 02:32:07.129183  384628 cache_images.go:255] Failed to load cached images for "functional-333040": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1124 02:32:07.129205  384628 cache_images.go:267] failed pushing to: functional-333040

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-333040
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image save --daemon kicbase/echo-server:functional-333040 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-333040
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-333040: exit status 1 (16.702283ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-333040

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-333040

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)
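
The six ImageCommands failures chain together: image save produced no tar (the check at functional_test.go:401), so the later image load from the same path fails with a stat error, and the daemon-side inspections never see the tag. A small sketch of the save/load round-trip these tests exercise, reusing the binary path and profile name from the logs (the tar path is illustrative):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		tar := "/tmp/echo-server-save.tar" // illustrative path
		run := func(args ...string) {
			cmd := exec.Command("out/minikube-linux-amd64", args...)
			cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
			if err := cmd.Run(); err != nil {
				panic(err)
			}
		}
		run("-p", "functional-333040", "image", "save", "kicbase/echo-server:functional-333040", tar)
		// Guard the load on the tar actually existing: the check that
		// failed in this run before ImageLoadFromFile even started.
		if _, err := os.Stat(tar); err != nil {
			panic(err)
		}
		run("-p", "functional-333040", "image", "load", tar)
	}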

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-333040 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-333040 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-8rm82" [89bb2271-199b-49b9-a620-48313278a456] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-333040 -n functional-333040
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-24 02:42:13.610903493 +0000 UTC m=+1092.662756853
functional_test.go:1460: (dbg) Run:  kubectl --context functional-333040 describe po hello-node-75c85bcc94-8rm82 -n default
functional_test.go:1460: (dbg) kubectl --context functional-333040 describe po hello-node-75c85bcc94-8rm82 -n default:
Name:             hello-node-75c85bcc94-8rm82
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-333040/192.168.49.2
Start Time:       Mon, 24 Nov 2025 02:32:13 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2jqr6 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-2jqr6:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-8rm82 to functional-333040
  Normal   Pulling    6m55s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m55s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m55s (x5 over 10m)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m53s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m53s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-333040 logs hello-node-75c85bcc94-8rm82 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-333040 logs hello-node-75c85bcc94-8rm82 -n default: exit status 1 (58.065033ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-8rm82" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-333040 logs hello-node-75c85bcc94-8rm82 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)
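
The deployment here is created from the bare name kicbase/echo-server, which is exactly what the enforcing short-name policy rejects, so the pod can never leave ImagePullBackOff. A hedged sketch of the same create/expose flow with a fully qualified reference (the docker.io/ prefix is the fix; :latest is an assumption, any pinned tag works; os/exec simply mirrors the test's own kubectl invocations):

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		steps := [][]string{
			{"kubectl", "--context", "functional-333040", "create", "deployment", "hello-node",
				"--image", "docker.io/kicbase/echo-server:latest"},
			{"kubectl", "--context", "functional-333040", "expose", "deployment", "hello-node",
				"--type=NodePort", "--port=8080"},
		}
		for _, s := range steps {
			if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
				log.Fatalf("%v: %v\n%s", s, err, out)
			}
		}
	}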

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 service --namespace=default --https --url hello-node: exit status 115 (524.908725ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30940
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-333040 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 service hello-node --url --format={{.IP}}: exit status 115 (520.690565ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-333040 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 service hello-node --url: exit status 115 (523.952062ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30940
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-333040 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30940
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)
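
The HTTPS, Format, and URL failures are all downstream of DeployApp: minikube resolves the NodePort URL (http://192.168.49.2:30940) but exits with SVC_UNREACHABLE because no running pod backs the service. A sketch of the kind of probe that separates "URL resolved" from "service answering" (function name and retry policy are illustrative):

	package triage

	import (
		"fmt"
		"net/http"
		"time"
	)

	// probe returns nil only once the endpoint actually answers, which it
	// never would in this run with zero ready echo-server pods.
	func probe(url string) error {
		var lastErr error
		for i := 0; i < 5; i++ {
			resp, err := http.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				lastErr = fmt.Errorf("unexpected status %s", resp.Status)
			} else {
				lastErr = err
			}
			time.Sleep(2 * time.Second)
		}
		return lastErr
	}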

                                                
                                    
x
+
TestJSONOutput/pause/Command (1.66s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-585498 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-585498 --output=json --user=testUser: exit status 80 (1.659357527s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"4964be17-9eca-4a25-bc6b-f4d26917a071","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-585498 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"7834d547-294b-4080-9c5f-4f67950beee5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-24T02:52:09Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"3aa910cf-1428-49ed-9f21-d7243e2b68bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-585498 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (1.66s)
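
Each stdout line above is a CloudEvents-style JSON envelope, so the structured failure (name, exitcode, message) can be recovered mechanically. A minimal decoding sketch matching the fields visible in the log (the struct and function names are illustrative):

	package triage

	import (
		"encoding/json"
		"fmt"
	)

	type minikubeEvent struct {
		Specversion string            `json:"specversion"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	// parseLine prints the structured error carried by one output line.
	func parseLine(line []byte) {
		var ev minikubeEvent
		if err := json.Unmarshal(line, &ev); err != nil {
			return // not a JSON event line; skip
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}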

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.73s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-585498 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-585498 --output=json --user=testUser: exit status 80 (1.73295051s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"29d163bb-b842-4d58-b2e5-15a0033abc90","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-585498 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"e6a6d8bc-6b26-460b-a591-32d8ca895e9a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-24T02:52:11Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"46a63f9e-fead-409e-b68a-7cdd581bf8fa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-585498 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.73s)
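
Both JSONOutput failures, and the pause failures below, share one cause: `sudo runc list -f json` cannot open /run/runc. That directory is runc's default state root; a host whose CRI-O is configured with crun instead keeps state under /run/crun (an assumption about this machine, not something the log confirms), and runc accepts an explicit root via `runc --root <dir> list -f json`. A hedged probe sketch:

	package triage

	import (
		"errors"
		"os"
	)

	// runtimeRoot reports which known OCI runtime state directory exists,
	// so a caller can pass it via `runc --root` instead of hitting the
	// "open /run/runc: no such file or directory" seen above.
	func runtimeRoot() (string, error) {
		for _, root := range []string{"/run/runc", "/run/crun"} {
			if _, err := os.Stat(root); err == nil {
				return root, nil
			}
		}
		return "", errors.New("no known OCI runtime state directory found")
	}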

                                                
                                    
x
+
TestPause/serial/Pause (6s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-530927 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-530927 --alsologtostderr -v=5: exit status 80 (2.508400684s)

                                                
                                                
-- stdout --
	* Pausing node pause-530927 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:04:52.732580  536792 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:04:52.732684  536792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:04:52.732693  536792 out.go:374] Setting ErrFile to fd 2...
	I1124 03:04:52.732697  536792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:04:52.733358  536792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:04:52.734069  536792 out.go:368] Setting JSON to false
	I1124 03:04:52.734109  536792 mustload.go:66] Loading cluster: pause-530927
	I1124 03:04:52.734557  536792 config.go:182] Loaded profile config "pause-530927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:04:52.735041  536792 cli_runner.go:164] Run: docker container inspect pause-530927 --format={{.State.Status}}
	I1124 03:04:52.754022  536792 host.go:66] Checking if "pause-530927" exists ...
	I1124 03:04:52.754377  536792 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:04:52.822133  536792 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 03:04:52.809602572 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:04:52.822671  536792 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763935228-21975/minikube-v1.37.0-1763935228-21975-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763935228-21975-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-530927 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 03:04:52.824223  536792 out.go:179] * Pausing node pause-530927 ... 
	I1124 03:04:52.825336  536792 host.go:66] Checking if "pause-530927" exists ...
	I1124 03:04:52.825607  536792 ssh_runner.go:195] Run: systemctl --version
	I1124 03:04:52.825646  536792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-530927
	I1124 03:04:52.846379  536792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33353 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/pause-530927/id_rsa Username:docker}
	I1124 03:04:52.951034  536792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:04:52.965382  536792 pause.go:52] kubelet running: true
	I1124 03:04:52.965448  536792 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:04:53.138940  536792 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:04:53.139056  536792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:04:53.219297  536792 cri.go:89] found id: "e2563685bbbb787f2dcaa7031d98f881b07cff82aa3418fdff322f7ded3a781d"
	I1124 03:04:53.219324  536792 cri.go:89] found id: "13c884591858d5f3dd598f14e7d0e5092f90334981841d73ce5c724262401f23"
	I1124 03:04:53.219330  536792 cri.go:89] found id: "2a467466a32f30878109a8f26060e1d828a3a95ad82d66388beab1f18922dc66"
	I1124 03:04:53.219335  536792 cri.go:89] found id: "44717dfd0dd19f67f9d5242ec3bdfd8f3ef090f8003cf2ea62ea03eadec418a2"
	I1124 03:04:53.219339  536792 cri.go:89] found id: "6c0c28404a4898860dddc52df5afabd341a06204e739de342cedcd93491a2e47"
	I1124 03:04:53.219345  536792 cri.go:89] found id: "21c0f7b1624aac3029a7217aacc15d8049cdbf93ea484ccfbaa5ca8134d8c67b"
	I1124 03:04:53.219350  536792 cri.go:89] found id: "db979218070c8912540c318fd1e65becb327b18a39073bb2f0cb3c7e22ec95cb"
	I1124 03:04:53.219354  536792 cri.go:89] found id: ""
	I1124 03:04:53.219402  536792 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:04:53.232226  536792 retry.go:31] will retry after 160.412407ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:04:53Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:04:53.393655  536792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:04:53.408072  536792 pause.go:52] kubelet running: false
	I1124 03:04:53.408126  536792 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:04:53.530946  536792 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:04:53.531031  536792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:04:53.613179  536792 cri.go:89] found id: "e2563685bbbb787f2dcaa7031d98f881b07cff82aa3418fdff322f7ded3a781d"
	I1124 03:04:53.613207  536792 cri.go:89] found id: "13c884591858d5f3dd598f14e7d0e5092f90334981841d73ce5c724262401f23"
	I1124 03:04:53.613213  536792 cri.go:89] found id: "2a467466a32f30878109a8f26060e1d828a3a95ad82d66388beab1f18922dc66"
	I1124 03:04:53.613219  536792 cri.go:89] found id: "44717dfd0dd19f67f9d5242ec3bdfd8f3ef090f8003cf2ea62ea03eadec418a2"
	I1124 03:04:53.613222  536792 cri.go:89] found id: "6c0c28404a4898860dddc52df5afabd341a06204e739de342cedcd93491a2e47"
	I1124 03:04:53.613227  536792 cri.go:89] found id: "21c0f7b1624aac3029a7217aacc15d8049cdbf93ea484ccfbaa5ca8134d8c67b"
	I1124 03:04:53.613231  536792 cri.go:89] found id: "db979218070c8912540c318fd1e65becb327b18a39073bb2f0cb3c7e22ec95cb"
	I1124 03:04:53.613235  536792 cri.go:89] found id: ""
	I1124 03:04:53.613285  536792 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:04:53.625656  536792 retry.go:31] will retry after 472.116273ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:04:53Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:04:54.098006  536792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:04:54.111750  536792 pause.go:52] kubelet running: false
	I1124 03:04:54.111801  536792 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:04:54.230994  536792 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:04:54.231100  536792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:04:54.298129  536792 cri.go:89] found id: "e2563685bbbb787f2dcaa7031d98f881b07cff82aa3418fdff322f7ded3a781d"
	I1124 03:04:54.298151  536792 cri.go:89] found id: "13c884591858d5f3dd598f14e7d0e5092f90334981841d73ce5c724262401f23"
	I1124 03:04:54.298158  536792 cri.go:89] found id: "2a467466a32f30878109a8f26060e1d828a3a95ad82d66388beab1f18922dc66"
	I1124 03:04:54.298162  536792 cri.go:89] found id: "44717dfd0dd19f67f9d5242ec3bdfd8f3ef090f8003cf2ea62ea03eadec418a2"
	I1124 03:04:54.298166  536792 cri.go:89] found id: "6c0c28404a4898860dddc52df5afabd341a06204e739de342cedcd93491a2e47"
	I1124 03:04:54.298171  536792 cri.go:89] found id: "21c0f7b1624aac3029a7217aacc15d8049cdbf93ea484ccfbaa5ca8134d8c67b"
	I1124 03:04:54.298175  536792 cri.go:89] found id: "db979218070c8912540c318fd1e65becb327b18a39073bb2f0cb3c7e22ec95cb"
	I1124 03:04:54.298179  536792 cri.go:89] found id: ""
	I1124 03:04:54.298220  536792 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:04:54.311867  536792 retry.go:31] will retry after 621.235741ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:04:54Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:04:54.933302  536792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:04:54.947267  536792 pause.go:52] kubelet running: false
	I1124 03:04:54.947336  536792 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:04:55.083047  536792 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:04:55.083130  536792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:04:55.153279  536792 cri.go:89] found id: "e2563685bbbb787f2dcaa7031d98f881b07cff82aa3418fdff322f7ded3a781d"
	I1124 03:04:55.153299  536792 cri.go:89] found id: "13c884591858d5f3dd598f14e7d0e5092f90334981841d73ce5c724262401f23"
	I1124 03:04:55.153306  536792 cri.go:89] found id: "2a467466a32f30878109a8f26060e1d828a3a95ad82d66388beab1f18922dc66"
	I1124 03:04:55.153311  536792 cri.go:89] found id: "44717dfd0dd19f67f9d5242ec3bdfd8f3ef090f8003cf2ea62ea03eadec418a2"
	I1124 03:04:55.153316  536792 cri.go:89] found id: "6c0c28404a4898860dddc52df5afabd341a06204e739de342cedcd93491a2e47"
	I1124 03:04:55.153321  536792 cri.go:89] found id: "21c0f7b1624aac3029a7217aacc15d8049cdbf93ea484ccfbaa5ca8134d8c67b"
	I1124 03:04:55.153326  536792 cri.go:89] found id: "db979218070c8912540c318fd1e65becb327b18a39073bb2f0cb3c7e22ec95cb"
	I1124 03:04:55.153330  536792 cri.go:89] found id: ""
	I1124 03:04:55.153367  536792 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:04:55.167430  536792 out.go:203] 
	W1124 03:04:55.168511  536792 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:04:55Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:04:55.168532  536792 out.go:285] * 
	W1124 03:04:55.174247  536792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:04:55.175375  536792 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-530927 --alsologtostderr -v=5" : exit status 80
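Root cause visible in the stderr capture above: minikube's pause path first enumerates running containers with `sudo runc list -f json`, and every attempt (including the 160ms, 472ms, and 621ms retries) fails because runc's default state directory /run/runc does not exist on the node, so the command gives up with GUEST_PAUSE even though crictl still sees seven running containers. A minimal manual check against the same profile, assuming SSH access to the node (illustrative commands, not part of the recorded run):

	# does runc's default state directory exist on the node?
	out/minikube-linux-amd64 -p pause-530927 ssh -- sudo ls -ld /run/runc
	# rerun the exact listing the pause path uses
	out/minikube-linux-amd64 -p pause-530927 ssh -- sudo runc list -f json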
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-530927
helpers_test.go:243: (dbg) docker inspect pause-530927:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d",
	        "Created": "2025-11-24T03:04:04.657703071Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:04:04.712800156Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d/hosts",
	        "LogPath": "/var/lib/docker/containers/0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d/0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d-json.log",
	        "Name": "/pause-530927",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-530927:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-530927",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d",
	                "LowerDir": "/var/lib/docker/overlay2/569a1925d1d3ee07e61a9f64ecaa073fdfe0036dc354dbc0cfc70c5d6329014f-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/569a1925d1d3ee07e61a9f64ecaa073fdfe0036dc354dbc0cfc70c5d6329014f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/569a1925d1d3ee07e61a9f64ecaa073fdfe0036dc354dbc0cfc70c5d6329014f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/569a1925d1d3ee07e61a9f64ecaa073fdfe0036dc354dbc0cfc70c5d6329014f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-530927",
	                "Source": "/var/lib/docker/volumes/pause-530927/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-530927",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-530927",
	                "name.minikube.sigs.k8s.io": "pause-530927",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c25e9c6244fae20c993ea79082f112ec156855501d6cd036e63d2878bb1240ce",
	            "SandboxKey": "/var/run/docker/netns/c25e9c6244fa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33354"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33356"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-530927": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9643fba55f3767c656603fb76039e52ab1853a2b57e72d597a928e3fcfc47a32",
	                    "EndpointID": "4b28d9ff5c90eb47c4734ac667319dc8e5a2ce80bc8a9ba6f5cb4a257da2be87",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "f6:e6:b1:c9:30:41",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-530927",
	                        "0d817ee5c958"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
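One detail in the inspect output above bears on the pause failure: HostConfig mounts fresh tmpfs filesystems at /run and /tmp ("Tmpfs": {"/run": "", "/tmp": ""}), so /run/runc exists only if a runtime creates it; since CRI-O is running seven containers yet the directory is absent, this suggests CRI-O keeps its runc state under a different root here. A quick confirmation from the host (hypothetical follow-up, not executed by the test):

	# verify /run is a tmpfs inside the node container and the runc state dir is missing
	docker exec pause-530927 sh -c 'mount | grep " /run "; ls -ld /run/runc'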
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-530927 -n pause-530927
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-530927 -n pause-530927: exit status 2 (349.457163ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
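The exit status 2 with Host still reported as Running matches the state the aborted pause left behind: the node container is up, but kubelet was disabled by the pause path (`pause.go:52] kubelet running: false` above). A fuller status view would make the component split explicit (hypothetical invocation, same binary and profile):

	out/minikube-linux-amd64 status -p pause-530927 --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'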
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-530927 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-029934 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:02 UTC │                     │
	│ stop    │ -p scheduled-stop-029934 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:02 UTC │                     │
	│ stop    │ -p scheduled-stop-029934 --cancel-scheduled                                                                                 │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:02 UTC │ 24 Nov 25 03:02 UTC │
	│ stop    │ -p scheduled-stop-029934 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:02 UTC │                     │
	│ stop    │ -p scheduled-stop-029934 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:02 UTC │                     │
	│ stop    │ -p scheduled-stop-029934 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:02 UTC │ 24 Nov 25 03:03 UTC │
	│ delete  │ -p scheduled-stop-029934                                                                                                    │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │ 24 Nov 25 03:03 UTC │
	│ start   │ -p insufficient-storage-628185 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-628185 │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │                     │
	│ delete  │ -p insufficient-storage-628185                                                                                              │ insufficient-storage-628185 │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │ 24 Nov 25 03:03 UTC │
	│ start   │ -p pause-530927 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-530927                │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p force-systemd-env-550049 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-550049    │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p NoKubernetes-565297 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │                     │
	│ start   │ -p offline-crio-493654 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-493654         │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p NoKubernetes-565297 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │ 24 Nov 25 03:04 UTC │
	│ delete  │ -p force-systemd-env-550049                                                                                                 │ force-systemd-env-550049    │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p cert-expiration-062725 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-062725      │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p NoKubernetes-565297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ delete  │ -p NoKubernetes-565297                                                                                                      │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ delete  │ -p offline-crio-493654                                                                                                      │ offline-crio-493654         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p pause-530927 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-530927                │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p NoKubernetes-565297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p force-systemd-flag-597158 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-597158   │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │                     │
	│ ssh     │ -p NoKubernetes-565297 sudo systemctl is-active --quiet service kubelet                                                     │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │                     │
	│ pause   │ -p pause-530927 --alsologtostderr -v=5                                                                                      │ pause-530927                │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │                     │
	│ stop    │ -p NoKubernetes-565297                                                                                                      │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:04:46
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:04:46.066327  533070 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:04:46.066645  533070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:04:46.066662  533070 out.go:374] Setting ErrFile to fd 2...
	I1124 03:04:46.066668  533070 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:04:46.067000  533070 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:04:46.068253  533070 out.go:368] Setting JSON to false
	I1124 03:04:46.069903  533070 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6433,"bootTime":1763947053,"procs":287,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:04:46.070000  533070 start.go:143] virtualization: kvm guest
	I1124 03:04:46.071789  533070 out.go:179] * [force-systemd-flag-597158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:04:46.073157  533070 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:04:46.073258  533070 notify.go:221] Checking for updates...
	I1124 03:04:46.075320  533070 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:04:46.076638  533070 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:04:46.078310  533070 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:04:46.079761  533070 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:04:46.080834  533070 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:04:46.082740  533070 config.go:182] Loaded profile config "NoKubernetes-565297": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1124 03:04:46.082936  533070 config.go:182] Loaded profile config "cert-expiration-062725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:04:46.083142  533070 config.go:182] Loaded profile config "pause-530927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:04:46.083270  533070 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:04:46.114662  533070 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:04:46.114901  533070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:04:46.188408  533070 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:54 OomKillDisable:false NGoroutines:68 SystemTime:2025-11-24 03:04:46.176772425 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:04:46.188540  533070 docker.go:319] overlay module found
	I1124 03:04:46.189943  533070 out.go:179] * Using the docker driver based on user configuration
	I1124 03:04:46.191173  533070 start.go:309] selected driver: docker
	I1124 03:04:46.191189  533070 start.go:927] validating driver "docker" against <nil>
	I1124 03:04:46.191212  533070 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:04:46.191897  533070 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:04:46.255803  533070 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:80 SystemTime:2025-11-24 03:04:46.244793895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:04:46.256046  533070 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:04:46.256342  533070 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 03:04:46.258056  533070 out.go:179] * Using Docker driver with root privileges
	I1124 03:04:46.259151  533070 cni.go:84] Creating CNI manager for ""
	I1124 03:04:46.259230  533070 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:04:46.259245  533070 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:04:46.259330  533070 start.go:353] cluster config:
	{Name:force-systemd-flag-597158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-597158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:04:46.260516  533070 out.go:179] * Starting "force-systemd-flag-597158" primary control-plane node in "force-systemd-flag-597158" cluster
	I1124 03:04:46.261476  533070 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:04:46.262474  533070 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:04:46.265161  533070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:04:46.265193  533070 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:04:46.265202  533070 cache.go:65] Caching tarball of preloaded images
	I1124 03:04:46.265240  533070 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:04:46.265304  533070 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:04:46.265316  533070 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:04:46.265413  533070 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/config.json ...
	I1124 03:04:46.265434  533070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/config.json: {Name:mk6b70835fc1a65edeb43b93af2d82d88822e470 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:04:46.287590  533070 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:04:46.287617  533070 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:04:46.287636  533070 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:04:46.287667  533070 start.go:360] acquireMachinesLock for force-systemd-flag-597158: {Name:mkb569561e245ef10712300b9f75be3abb4a3129 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:04:46.287764  533070 start.go:364] duration metric: took 75.315µs to acquireMachinesLock for "force-systemd-flag-597158"
	I1124 03:04:46.287796  533070 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-597158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-597158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:04:46.287875  533070 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:04:46.089222  527382 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:04:46.667365  527382 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:04:46.668659  527382 kubeadm.go:319] 
	I1124 03:04:46.668740  527382 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:04:46.668745  527382 kubeadm.go:319] 
	I1124 03:04:46.668856  527382 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:04:46.668861  527382 kubeadm.go:319] 
	I1124 03:04:46.668935  527382 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:04:46.669022  527382 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:04:46.669082  527382 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:04:46.669085  527382 kubeadm.go:319] 
	I1124 03:04:46.669164  527382 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:04:46.669168  527382 kubeadm.go:319] 
	I1124 03:04:46.669260  527382 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:04:46.669277  527382 kubeadm.go:319] 
	I1124 03:04:46.669354  527382 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:04:46.669456  527382 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:04:46.669552  527382 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:04:46.669556  527382 kubeadm.go:319] 
	I1124 03:04:46.669661  527382 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:04:46.669752  527382 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:04:46.669756  527382 kubeadm.go:319] 
	I1124 03:04:46.669860  527382 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 9p3hiy.4fb1w1cqv1eedbqr \
	I1124 03:04:46.670025  527382 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:04:46.670055  527382 kubeadm.go:319] 	--control-plane 
	I1124 03:04:46.670058  527382 kubeadm.go:319] 
	I1124 03:04:46.670157  527382 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:04:46.670160  527382 kubeadm.go:319] 
	I1124 03:04:46.670251  527382 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 9p3hiy.4fb1w1cqv1eedbqr \
	I1124 03:04:46.670372  527382 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
	I1124 03:04:46.673241  527382 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:04:46.673388  527382 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:04:46.673414  527382 cni.go:84] Creating CNI manager for ""
	I1124 03:04:46.673421  527382 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:04:46.675375  527382 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:04:46.676668  527382 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:04:46.681468  527382 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:04:46.681477  527382 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:04:46.698210  527382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:04:47.047129  527382 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:04:47.047291  527382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:04:47.047370  527382 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-062725 minikube.k8s.io/updated_at=2025_11_24T03_04_47_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=cert-expiration-062725 minikube.k8s.io/primary=true
	I1124 03:04:47.157236  527382 ops.go:34] apiserver oom_adj: -16
	I1124 03:04:47.160842  527382 kubeadm.go:1114] duration metric: took 113.606445ms to wait for elevateKubeSystemPrivileges
	I1124 03:04:47.160864  527382 kubeadm.go:403] duration metric: took 9.835642501s to StartCluster
	I1124 03:04:47.160898  527382 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:04:47.160976  527382 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:04:47.162283  527382 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:04:47.162510  527382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:04:47.162518  527382 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:04:47.162576  527382 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:04:47.162693  527382 addons.go:70] Setting storage-provisioner=true in profile "cert-expiration-062725"
	I1124 03:04:47.162729  527382 addons.go:239] Setting addon storage-provisioner=true in "cert-expiration-062725"
	I1124 03:04:47.162741  527382 config.go:182] Loaded profile config "cert-expiration-062725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:04:47.162759  527382 host.go:66] Checking if "cert-expiration-062725" exists ...
	I1124 03:04:47.162772  527382 addons.go:70] Setting default-storageclass=true in profile "cert-expiration-062725"
	I1124 03:04:47.162795  527382 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-062725"
	I1124 03:04:47.163193  527382 cli_runner.go:164] Run: docker container inspect cert-expiration-062725 --format={{.State.Status}}
	I1124 03:04:47.163393  527382 cli_runner.go:164] Run: docker container inspect cert-expiration-062725 --format={{.State.Status}}
	I1124 03:04:47.164142  527382 out.go:179] * Verifying Kubernetes components...
	I1124 03:04:47.165165  527382 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:04:47.190824  527382 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:04:47.191994  527382 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:04:47.192007  527382 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:04:47.192072  527382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-062725
	I1124 03:04:47.194042  527382 addons.go:239] Setting addon default-storageclass=true in "cert-expiration-062725"
	I1124 03:04:47.194078  527382 host.go:66] Checking if "cert-expiration-062725" exists ...
	I1124 03:04:47.194588  527382 cli_runner.go:164] Run: docker container inspect cert-expiration-062725 --format={{.State.Status}}
	I1124 03:04:47.225494  527382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/cert-expiration-062725/id_rsa Username:docker}
	I1124 03:04:47.233645  527382 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:04:47.233659  527382 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:04:47.233712  527382 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cert-expiration-062725
	I1124 03:04:47.267671  527382 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33363 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/cert-expiration-062725/id_rsa Username:docker}
	I1124 03:04:47.288429  527382 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:04:47.369508  527382 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:04:47.370526  527382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:04:47.412076  527382 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:04:47.522670  527382 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 03:04:47.523898  527382 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:04:47.523953  527382 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:04:47.707195  527382 api_server.go:72] duration metric: took 544.647503ms to wait for apiserver process to appear ...
	I1124 03:04:47.707213  527382 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:04:47.707236  527382 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:04:47.712798  527382 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:04:47.713707  527382 api_server.go:141] control plane version: v1.34.1
	I1124 03:04:47.713735  527382 api_server.go:131] duration metric: took 6.513915ms to wait for apiserver health ...
	I1124 03:04:47.713745  527382 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:04:47.713961  527382 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:04:47.714987  527382 addons.go:530] duration metric: took 552.403737ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:04:47.716373  527382 system_pods.go:59] 5 kube-system pods found
	I1124 03:04:47.716427  527382 system_pods.go:61] "etcd-cert-expiration-062725" [fcecd768-f0b0-4293-bd9a-16a4bd4c2169] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:04:47.716442  527382 system_pods.go:61] "kube-apiserver-cert-expiration-062725" [6a3271f0-6100-4c57-9ffa-2019b384b52e] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:04:47.716454  527382 system_pods.go:61] "kube-controller-manager-cert-expiration-062725" [4313d539-6d50-4393-844d-6d84524c4566] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:04:47.716461  527382 system_pods.go:61] "kube-scheduler-cert-expiration-062725" [5408b780-1cf8-4fa8-a457-ea36fc4a8e64] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:04:47.716466  527382 system_pods.go:61] "storage-provisioner" [9faba197-1147-4b57-94bf-b99eb583a306] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:04:47.716472  527382 system_pods.go:74] duration metric: took 2.722168ms to wait for pod list to return data ...
	I1124 03:04:47.716482  527382 kubeadm.go:587] duration metric: took 553.940955ms to wait for: map[apiserver:true system_pods:true]
	I1124 03:04:47.716494  527382 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:04:47.719056  527382 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:04:47.719073  527382 node_conditions.go:123] node cpu capacity is 8
	I1124 03:04:47.719089  527382 node_conditions.go:105] duration metric: took 2.591366ms to run NodePressure ...
	I1124 03:04:47.719104  527382 start.go:242] waiting for startup goroutines ...
	I1124 03:04:48.026768  527382 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-062725" context rescaled to 1 replicas
	I1124 03:04:48.026803  527382 start.go:247] waiting for cluster config update ...
	I1124 03:04:48.026817  527382 start.go:256] writing updated cluster config ...
	I1124 03:04:48.064943  527382 ssh_runner.go:195] Run: rm -f paused
	I1124 03:04:48.118864  527382 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:04:48.174979  527382 out.go:179] * Done! kubectl is now configured to use "cert-expiration-062725" cluster and "default" namespace by default
	I1124 03:04:44.777014  532081 out.go:252] * Updating the running docker "pause-530927" container ...
	I1124 03:04:44.777049  532081 machine.go:94] provisionDockerMachine start ...
	I1124 03:04:44.777131  532081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-530927
	I1124 03:04:44.798331  532081 main.go:143] libmachine: Using SSH client type: native
	I1124 03:04:44.798683  532081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33353 <nil> <nil>}
	I1124 03:04:44.798705  532081 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:04:44.963521  532081 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-530927
	
	I1124 03:04:44.963572  532081 ubuntu.go:182] provisioning hostname "pause-530927"
	I1124 03:04:44.963647  532081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-530927
	I1124 03:04:44.988707  532081 main.go:143] libmachine: Using SSH client type: native
	I1124 03:04:44.989082  532081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33353 <nil> <nil>}
	I1124 03:04:44.989118  532081 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-530927 && echo "pause-530927" | sudo tee /etc/hostname
	I1124 03:04:45.155607  532081 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-530927
	
	I1124 03:04:45.155688  532081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-530927
	I1124 03:04:45.179822  532081 main.go:143] libmachine: Using SSH client type: native
	I1124 03:04:45.180161  532081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33353 <nil> <nil>}
	I1124 03:04:45.180191  532081 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-530927' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-530927/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-530927' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:04:45.333738  532081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
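
The SSH command above is an idempotent /etc/hosts update: if no line already ends with the hostname, it either rewrites an existing 127.0.1.1 entry in place or appends a new one. A rough Go equivalent of the same logic, operating on a local file rather than over SSH (the function name is ours):

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell above: do nothing if the hostname
// is already mapped, otherwise rewrite the 127.0.1.1 line or append one.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	hasName := regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`)
	if hasName.Match(data) {
		return nil // already present, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	entry := "127.0.1.1 " + hostname
	var out string
	if loopback.Match(data) {
		out = loopback.ReplaceAllString(string(data), entry)
	} else {
		out = strings.TrimRight(string(data), "\n") + "\n" + entry + "\n"
	}
	return os.WriteFile(path, []byte(out), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "pause-530927"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
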
	I1124 03:04:45.333763  532081 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:04:45.333804  532081 ubuntu.go:190] setting up certificates
	I1124 03:04:45.333825  532081 provision.go:84] configureAuth start
	I1124 03:04:45.333873  532081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-530927
	I1124 03:04:45.352341  532081 provision.go:143] copyHostCerts
	I1124 03:04:45.352388  532081 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:04:45.352405  532081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:04:45.352467  532081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:04:45.352586  532081 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:04:45.352596  532081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:04:45.352625  532081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:04:45.352697  532081 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:04:45.352705  532081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:04:45.352728  532081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:04:45.352787  532081 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.pause-530927 san=[127.0.0.1 192.168.85.2 localhost minikube pause-530927]
	I1124 03:04:45.530311  532081 provision.go:177] copyRemoteCerts
	I1124 03:04:45.530368  532081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:04:45.530403  532081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-530927
	I1124 03:04:45.549742  532081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33353 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/pause-530927/id_rsa Username:docker}
	I1124 03:04:45.650503  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:04:45.668782  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 03:04:45.686366  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:04:45.710825  532081 provision.go:87] duration metric: took 376.981246ms to configureAuth
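
configureAuth copies the shared CA material and then signs a per-machine server certificate whose SANs cover every name and address the machine answers on (127.0.0.1, the container IP 192.168.85.2, localhost, minikube, and the profile name). A self-contained sketch of that signing step with crypto/x509, using an in-memory CA and illustrative key sizes and validity (error handling elided for brevity):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Self-signed CA standing in for minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.pause-530927"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
		DNSNames:     []string{"localhost", "minikube", "pause-530927"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Printf("server cert: %d bytes DER\n", len(srvDER))
}
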
	I1124 03:04:45.711004  532081 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:04:45.711261  532081 config.go:182] Loaded profile config "pause-530927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:04:45.711383  532081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-530927
	I1124 03:04:45.732343  532081 main.go:143] libmachine: Using SSH client type: native
	I1124 03:04:45.732554  532081 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33353 <nil> <nil>}
	I1124 03:04:45.732571  532081 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:04:46.168123  532081 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:04:46.168163  532081 machine.go:97] duration metric: took 1.39109723s to provisionDockerMachine
	I1124 03:04:46.168178  532081 start.go:293] postStartSetup for "pause-530927" (driver="docker")
	I1124 03:04:46.168191  532081 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:04:46.168249  532081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:04:46.168298  532081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-530927
	I1124 03:04:46.190109  532081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33353 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/pause-530927/id_rsa Username:docker}
	I1124 03:04:46.295963  532081 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:04:46.299535  532081 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:04:46.299567  532081 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:04:46.299578  532081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:04:46.299626  532081 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:04:46.299719  532081 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:04:46.299841  532081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:04:46.308171  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:04:46.333445  532081 start.go:296] duration metric: took 165.248721ms for postStartSetup
	I1124 03:04:46.333599  532081 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:04:46.333682  532081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-530927
	I1124 03:04:46.355395  532081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33353 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/pause-530927/id_rsa Username:docker}
	I1124 03:04:46.455976  532081 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:04:46.464225  532081 fix.go:56] duration metric: took 1.711939913s for fixHost
	I1124 03:04:46.464255  532081 start.go:83] releasing machines lock for "pause-530927", held for 1.712023565s
	I1124 03:04:46.464332  532081 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-530927
	I1124 03:04:46.487676  532081 ssh_runner.go:195] Run: cat /version.json
	I1124 03:04:46.487734  532081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:04:46.487737  532081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-530927
	I1124 03:04:46.487813  532081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-530927
	I1124 03:04:46.510126  532081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33353 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/pause-530927/id_rsa Username:docker}
	I1124 03:04:46.510609  532081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33353 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/pause-530927/id_rsa Username:docker}
	I1124 03:04:46.707761  532081 ssh_runner.go:195] Run: systemctl --version
	I1124 03:04:46.717387  532081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:04:46.777741  532081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:04:46.784530  532081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:04:46.784600  532081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:04:46.795681  532081 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
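
The find/mv step above renames any bridge or podman CNI config with a .mk_disabled suffix so that only minikube's chosen CNI (kindnet here) gets loaded; in this run nothing matched, so nothing was disabled. A Go sketch of the same rename pass (the function name is ours):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIConfigs renames bridge/podman CNI configs so the
// runtime will not load them, skipping files already disabled.
func disableBridgeCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return moved, err
		}
		moved = append(moved, src)
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIConfigs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println("disabled:", moved)
}
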
	I1124 03:04:46.795728  532081 start.go:496] detecting cgroup driver to use...
	I1124 03:04:46.795761  532081 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:04:46.795807  532081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:04:46.811552  532081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:04:46.826739  532081 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:04:46.826794  532081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:04:46.852820  532081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:04:46.882688  532081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:04:47.061448  532081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:04:47.232147  532081 docker.go:234] disabling docker service ...
	I1124 03:04:47.232220  532081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:04:47.258804  532081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:04:47.277330  532081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:04:47.455336  532081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:04:47.601716  532081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:04:47.616182  532081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:04:47.632916  532081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:04:47.632985  532081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:47.642932  532081 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:04:47.643001  532081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:47.652546  532081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:47.662279  532081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:47.672950  532081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:04:47.683865  532081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:47.695363  532081 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:47.706161  532081 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:47.717844  532081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:04:47.725726  532081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:04:47.733959  532081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:04:47.849550  532081 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:04:49.511363  532081 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.661774748s)
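
The sed pipeline above rewrites CRI-O's drop-in so its pause image and cgroup manager match what kubeadm will be told to use, then reloads systemd and restarts crio. A sketch of the two central substitutions expressed as Go regexp rewrites (only those two edits are shown; the path and values are taken from the log):

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Pin the pause image kubeadm expects for this Kubernetes version.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Align CRI-O's cgroup manager with the kubelet's cgroupDriver.
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	// minikube then runs `systemctl daemon-reload` and
	// `systemctl restart crio` so the drop-in takes effect, as logged.
}
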
	I1124 03:04:49.511398  532081 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:04:49.511439  532081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:04:49.515918  532081 start.go:564] Will wait 60s for crictl version
	I1124 03:04:49.515997  532081 ssh_runner.go:195] Run: which crictl
	I1124 03:04:49.519702  532081 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:04:49.546823  532081 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:04:49.546930  532081 ssh_runner.go:195] Run: crio --version
	I1124 03:04:49.576028  532081 ssh_runner.go:195] Run: crio --version
	I1124 03:04:49.610231  532081 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:04:45.255431  532332 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:04:45.255637  532332 start.go:159] libmachine.API.Create for "NoKubernetes-565297" (driver="docker")
	I1124 03:04:45.255671  532332 client.go:173] LocalClient.Create starting
	I1124 03:04:45.255739  532332 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:04:45.255775  532332 main.go:143] libmachine: Decoding PEM data...
	I1124 03:04:45.255800  532332 main.go:143] libmachine: Parsing certificate...
	I1124 03:04:45.255877  532332 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:04:45.255931  532332 main.go:143] libmachine: Decoding PEM data...
	I1124 03:04:45.255952  532332 main.go:143] libmachine: Parsing certificate...
	I1124 03:04:45.256322  532332 cli_runner.go:164] Run: docker network inspect NoKubernetes-565297 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:04:45.275481  532332 cli_runner.go:211] docker network inspect NoKubernetes-565297 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:04:45.275548  532332 network_create.go:284] running [docker network inspect NoKubernetes-565297] to gather additional debugging logs...
	I1124 03:04:45.275577  532332 cli_runner.go:164] Run: docker network inspect NoKubernetes-565297
	W1124 03:04:45.296706  532332 cli_runner.go:211] docker network inspect NoKubernetes-565297 returned with exit code 1
	I1124 03:04:45.296754  532332 network_create.go:287] error running [docker network inspect NoKubernetes-565297]: docker network inspect NoKubernetes-565297: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network NoKubernetes-565297 not found
	I1124 03:04:45.296775  532332 network_create.go:289] output of [docker network inspect NoKubernetes-565297]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network NoKubernetes-565297 not found
	
	** /stderr **
	I1124 03:04:45.296962  532332 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:04:45.317141  532332 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:04:45.318067  532332 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:04:45.318614  532332 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:04:45.319260  532332 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-9399a8ec12aa IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:52:9a:71:f0:97:a5} reservation:<nil>}
	I1124 03:04:45.320039  532332 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-9643fba55f37 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:b2:f5:f4:85:5b:7d} reservation:<nil>}
	I1124 03:04:45.320870  532332 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-6e90e404eadb IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:9a:85:bf:b4:d2:e7} reservation:<nil>}
	I1124 03:04:45.321797  532332 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ef1f30}
	I1124 03:04:45.321823  532332 network_create.go:124] attempt to create docker network NoKubernetes-565297 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 03:04:45.321956  532332 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=NoKubernetes-565297 NoKubernetes-565297
	I1124 03:04:45.399819  532332 network_create.go:108] docker network NoKubernetes-565297 192.168.103.0/24 created
	I1124 03:04:45.399858  532332 kic.go:121] calculated static IP "192.168.103.2" for the "NoKubernetes-565297" container
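
The subnet scan above is a first-fit search: candidate /24 networks start at 192.168.49.0 and, judging by this log, advance the third octet by 9 until one is not claimed by an existing bridge, then the container gets the .2 address in that subnet. A toy Go version with the taken set stubbed in (minikube actually inspects host bridge interfaces):

package main

import "fmt"

// firstFreeSubnet walks the candidate /24s seen in the log
// (49, 58, 67, ... in the third octet) and returns the first free one.
func firstFreeSubnet(taken map[int]bool) string {
	for octet := 49; octet <= 255; octet += 9 {
		if !taken[octet] {
			return fmt.Sprintf("192.168.%d.0/24", octet)
		}
	}
	return ""
}

func main() {
	// Subnets the log reports as taken by existing minikube networks.
	taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.103.0/24, as in the log
}
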
	I1124 03:04:45.399972  532332 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:04:45.419624  532332 cli_runner.go:164] Run: docker volume create NoKubernetes-565297 --label name.minikube.sigs.k8s.io=NoKubernetes-565297 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:04:45.439841  532332 oci.go:103] Successfully created a docker volume NoKubernetes-565297
	I1124 03:04:45.439922  532332 cli_runner.go:164] Run: docker run --rm --name NoKubernetes-565297-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-565297 --entrypoint /usr/bin/test -v NoKubernetes-565297:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:04:46.083569  532332 oci.go:107] Successfully prepared a docker volume NoKubernetes-565297
	I1124 03:04:46.083770  532332 preload.go:178] Skipping preload logic due to --no-kubernetes flag
	W1124 03:04:46.083955  532332 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:04:46.084057  532332 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:04:46.084132  532332 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:04:46.161655  532332 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname NoKubernetes-565297 --name NoKubernetes-565297 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=NoKubernetes-565297 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=NoKubernetes-565297 --network NoKubernetes-565297 --ip 192.168.103.2 --volume NoKubernetes-565297:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:04:46.478005  532332 cli_runner.go:164] Run: docker container inspect NoKubernetes-565297 --format={{.State.Running}}
	I1124 03:04:46.500469  532332 cli_runner.go:164] Run: docker container inspect NoKubernetes-565297 --format={{.State.Status}}
	I1124 03:04:46.524524  532332 cli_runner.go:164] Run: docker exec NoKubernetes-565297 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:04:46.573234  532332 oci.go:144] the created container "NoKubernetes-565297" has a running status.
	I1124 03:04:46.573264  532332 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/NoKubernetes-565297/id_rsa...
	I1124 03:04:46.626531  532332 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/NoKubernetes-565297/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1124 03:04:46.626590  532332 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/NoKubernetes-565297/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:04:46.659526  532332 cli_runner.go:164] Run: docker container inspect NoKubernetes-565297 --format={{.State.Status}}
	I1124 03:04:46.679955  532332 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:04:46.679981  532332 kic_runner.go:114] Args: [docker exec --privileged NoKubernetes-565297 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:04:46.727607  532332 cli_runner.go:164] Run: docker container inspect NoKubernetes-565297 --format={{.State.Status}}
	I1124 03:04:46.762838  532332 machine.go:94] provisionDockerMachine start ...
	I1124 03:04:46.762954  532332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-565297
	I1124 03:04:46.787152  532332 main.go:143] libmachine: Using SSH client type: native
	I1124 03:04:46.787603  532332 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33368 <nil> <nil>}
	I1124 03:04:46.787654  532332 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:04:46.789227  532332 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59630->127.0.0.1:33368: read: connection reset by peer
	I1124 03:04:49.937533  532332 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-565297
	
	I1124 03:04:49.937568  532332 ubuntu.go:182] provisioning hostname "NoKubernetes-565297"
	I1124 03:04:49.937638  532332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-565297
	I1124 03:04:49.956237  532332 main.go:143] libmachine: Using SSH client type: native
	I1124 03:04:49.956456  532332 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33368 <nil> <nil>}
	I1124 03:04:49.956468  532332 main.go:143] libmachine: About to run SSH command:
	sudo hostname NoKubernetes-565297 && echo "NoKubernetes-565297" | sudo tee /etc/hostname
	I1124 03:04:49.611600  532081 cli_runner.go:164] Run: docker network inspect pause-530927 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:04:49.630864  532081 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 03:04:49.635810  532081 kubeadm.go:884] updating cluster {Name:pause-530927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-530927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:04:49.636044  532081 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:04:49.636119  532081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:04:49.670780  532081 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:04:49.670802  532081 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:04:49.670845  532081 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:04:49.708139  532081 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:04:49.708163  532081 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:04:49.708171  532081 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 03:04:49.708274  532081 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-530927 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-530927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:04:49.708341  532081 ssh_runner.go:195] Run: crio config
	I1124 03:04:49.760221  532081 cni.go:84] Creating CNI manager for ""
	I1124 03:04:49.760252  532081 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:04:49.760273  532081 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:04:49.760301  532081 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-530927 NodeName:pause-530927 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:04:49.760444  532081 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-530927"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
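
The generated config ends with a KubeletConfiguration whose cgroupDriver must agree with the cgroup_manager written into the CRI-O drop-in earlier; both say "systemd" here, which is why the restarted kubelet comes up cleanly. A small sketch of such a consistency check, assuming the external gopkg.in/yaml.v3 package:

package main

import (
	"fmt"

	"gopkg.in/yaml.v3" // external dependency: go get gopkg.in/yaml.v3
)

func main() {
	// Trimmed copy of the KubeletConfiguration document above.
	kubeletDoc := []byte(`
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
`)
	var cfg struct {
		CgroupDriver string `yaml:"cgroupDriver"`
	}
	if err := yaml.Unmarshal(kubeletDoc, &cfg); err != nil {
		panic(err)
	}
	if cfg.CgroupDriver != "systemd" {
		fmt.Println("mismatch: kubelet cgroup driver is", cfg.CgroupDriver)
		return
	}
	fmt.Println("kubelet and CRI-O both use the systemd cgroup driver")
}
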
	
	I1124 03:04:49.760539  532081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:04:49.768869  532081 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:04:49.768941  532081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:04:49.776557  532081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1124 03:04:49.789473  532081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:04:49.803150  532081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1124 03:04:49.815659  532081 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:04:49.819265  532081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:04:49.947582  532081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:04:49.960941  532081 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/pause-530927 for IP: 192.168.85.2
	I1124 03:04:49.960961  532081 certs.go:195] generating shared ca certs ...
	I1124 03:04:49.960978  532081 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:04:49.961103  532081 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:04:49.961146  532081 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:04:49.961156  532081 certs.go:257] generating profile certs ...
	I1124 03:04:49.961243  532081 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/pause-530927/client.key
	I1124 03:04:49.961314  532081 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/pause-530927/apiserver.key.11d45752
	I1124 03:04:49.961399  532081 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/pause-530927/proxy-client.key
	I1124 03:04:49.961509  532081 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:04:49.961538  532081 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:04:49.961548  532081 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:04:49.961571  532081 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:04:49.961594  532081 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:04:49.961617  532081 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:04:49.961655  532081 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:04:49.962271  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:04:49.980572  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:04:49.998475  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:04:50.016362  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:04:50.033283  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/pause-530927/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 03:04:50.050925  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/pause-530927/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:04:50.135766  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/pause-530927/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:04:50.155337  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/pause-530927/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:04:50.173935  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:04:50.191438  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:04:50.208607  532081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:04:50.226163  532081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:04:50.238454  532081 ssh_runner.go:195] Run: openssl version
	I1124 03:04:50.244332  532081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:04:50.252319  532081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:04:50.255937  532081 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:04:50.255979  532081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:04:50.291071  532081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:04:50.298793  532081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:04:50.308380  532081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:04:50.311829  532081 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:04:50.311877  532081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:04:50.350484  532081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:04:50.358375  532081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:04:50.366327  532081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:04:50.369788  532081 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:04:50.369832  532081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:04:50.403586  532081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
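
Each test -L / ln -fs pair above creates the <subject-hash>.0 symlink (for example b5213941.0) that OpenSSL's trust-store lookup expects, with the hash taken from openssl x509 -hash -noout. A Go sketch that shells out the same way (the helper name is ours; minikube runs these commands over SSH):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash asks openssl for the certificate's subject hash and
// creates the "<hash>.0" symlink in the system trust directory.
func linkBySubjectHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // emulate ln -f: replace any existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/etc/ssl/certs/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
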
	I1124 03:04:50.412837  532081 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:04:50.416351  532081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:04:50.454231  532081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:04:50.500390  532081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:04:50.539230  532081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:04:50.585074  532081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:04:50.626491  532081 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
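
The -checkend 86400 runs above make openssl exit non-zero if a certificate expires within the next 24 hours, which is what would trigger regeneration on restart; here every control-plane cert passes. An equivalent check in Go with crypto/x509 (the function name is ours):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// inside the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
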
	I1124 03:04:50.671843  532081 kubeadm.go:401] StartCluster: {Name:pause-530927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-530927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:04:50.672010  532081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:04:50.672077  532081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:04:50.705116  532081 cri.go:89] found id: "e2563685bbbb787f2dcaa7031d98f881b07cff82aa3418fdff322f7ded3a781d"
	I1124 03:04:50.705138  532081 cri.go:89] found id: "13c884591858d5f3dd598f14e7d0e5092f90334981841d73ce5c724262401f23"
	I1124 03:04:50.705143  532081 cri.go:89] found id: "2a467466a32f30878109a8f26060e1d828a3a95ad82d66388beab1f18922dc66"
	I1124 03:04:50.705148  532081 cri.go:89] found id: "44717dfd0dd19f67f9d5242ec3bdfd8f3ef090f8003cf2ea62ea03eadec418a2"
	I1124 03:04:50.705152  532081 cri.go:89] found id: "6c0c28404a4898860dddc52df5afabd341a06204e739de342cedcd93491a2e47"
	I1124 03:04:50.705156  532081 cri.go:89] found id: "21c0f7b1624aac3029a7217aacc15d8049cdbf93ea484ccfbaa5ca8134d8c67b"
	I1124 03:04:50.705159  532081 cri.go:89] found id: "db979218070c8912540c318fd1e65becb327b18a39073bb2f0cb3c7e22ec95cb"
	I1124 03:04:50.705167  532081 cri.go:89] found id: ""
	I1124 03:04:50.705222  532081 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:04:50.720430  532081 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:04:50Z" level=error msg="open /run/runc: no such file or directory"
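
The runc list -f json probe looks for paused kube-system containers to unpause before restarting; here /run/runc does not exist, so minikube logs the warning and simply continues with a cluster restart. A hedged Go sketch of parsing that output (the field names follow runc's JSON state listing, to the best of our knowledge):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// pausedContainers runs `runc list -f json` and collects the IDs of
// containers whose status is "paused". When /run/runc is absent the
// command fails, and the caller can fall back as the log shows.
func pausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, err
	}
	var list []struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}
	if err := json.Unmarshal(out, &list); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range list {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainers()
	if err != nil {
		fmt.Println("list paused failed:", err)
		return
	}
	fmt.Println("paused:", ids)
}
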
	I1124 03:04:50.720521  532081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:04:50.729402  532081 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:04:50.729422  532081 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:04:50.729472  532081 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:04:50.737702  532081 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:04:50.738514  532081 kubeconfig.go:125] found "pause-530927" server: "https://192.168.85.2:8443"
	I1124 03:04:50.739421  532081 kapi.go:59] client config for pause-530927: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21975-345525/.minikube/profiles/pause-530927/client.crt", KeyFile:"/home/jenkins/minikube-integration/21975-345525/.minikube/profiles/pause-530927/client.key", CAFile:"/home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 03:04:50.739939  532081 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1124 03:04:50.739966  532081 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1124 03:04:50.739972  532081 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1124 03:04:50.739978  532081 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1124 03:04:50.739983  532081 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1124 03:04:50.740495  532081 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:04:50.748256  532081 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 03:04:50.748285  532081 kubeadm.go:602] duration metric: took 18.856011ms to restartPrimaryControlPlane
	I1124 03:04:50.748295  532081 kubeadm.go:403] duration metric: took 76.461446ms to StartCluster
	I1124 03:04:50.748314  532081 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:04:50.748380  532081 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:04:50.749309  532081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:04:50.749536  532081 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:04:50.749646  532081 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:04:50.749788  532081 config.go:182] Loaded profile config "pause-530927": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:04:50.751148  532081 out.go:179] * Verifying Kubernetes components...
	I1124 03:04:50.751815  532081 out.go:179] * Enabled addons: 
	I1124 03:04:46.292781  533070 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:04:46.293060  533070 start.go:159] libmachine.API.Create for "force-systemd-flag-597158" (driver="docker")
	I1124 03:04:46.293096  533070 client.go:173] LocalClient.Create starting
	I1124 03:04:46.293168  533070 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:04:46.293216  533070 main.go:143] libmachine: Decoding PEM data...
	I1124 03:04:46.293245  533070 main.go:143] libmachine: Parsing certificate...
	I1124 03:04:46.293322  533070 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:04:46.293349  533070 main.go:143] libmachine: Decoding PEM data...
	I1124 03:04:46.293369  533070 main.go:143] libmachine: Parsing certificate...
	I1124 03:04:46.293708  533070 cli_runner.go:164] Run: docker network inspect force-systemd-flag-597158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:04:46.310854  533070 cli_runner.go:211] docker network inspect force-systemd-flag-597158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:04:46.310927  533070 network_create.go:284] running [docker network inspect force-systemd-flag-597158] to gather additional debugging logs...
	I1124 03:04:46.310952  533070 cli_runner.go:164] Run: docker network inspect force-systemd-flag-597158
	W1124 03:04:46.334582  533070 cli_runner.go:211] docker network inspect force-systemd-flag-597158 returned with exit code 1
	I1124 03:04:46.334610  533070 network_create.go:287] error running [docker network inspect force-systemd-flag-597158]: docker network inspect force-systemd-flag-597158: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-597158 not found
	I1124 03:04:46.334627  533070 network_create.go:289] output of [docker network inspect force-systemd-flag-597158]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-597158 not found
	
	** /stderr **
	I1124 03:04:46.334760  533070 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:04:46.354652  533070 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:04:46.355783  533070 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:04:46.356474  533070 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:04:46.357453  533070 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001edae00}
	I1124 03:04:46.357494  533070 network_create.go:124] attempt to create docker network force-systemd-flag-597158 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 03:04:46.357548  533070 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-597158 force-systemd-flag-597158
	I1124 03:04:46.496156  533070 network_create.go:108] docker network force-systemd-flag-597158 192.168.76.0/24 created
	I1124 03:04:46.496200  533070 kic.go:121] calculated static IP "192.168.76.2" for the "force-systemd-flag-597158" container
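
	[The three "skipping subnet ... that is taken" lines above show the free-subnet search stepping through 192.168.49.0/24, .58, .67 and settling on .76, i.e. +9 per attempt. A rough bash equivalent of that probe, illustrative only and not minikube's actual code path, assuming docker and coreutils:]

	    # Illustrative sketch: first 192.168.x.0/24 not claimed by any
	    # existing Docker network, mirroring the "skipping subnet" lines.
	    used=$(docker network ls -q | xargs -r docker network inspect \
	            --format '{{range .IPAM.Config}}{{println .Subnet}}{{end}}')
	    for third in $(seq 49 9 247); do
	      cidr="192.168.${third}.0/24"
	      if ! grep -qx "$cidr" <<<"$used"; then
	        echo "using free private subnet $cidr"; break
	      fi
	      echo "skipping subnet $cidr that is taken"
	    done
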
	I1124 03:04:46.496299  533070 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:04:46.518865  533070 cli_runner.go:164] Run: docker volume create force-systemd-flag-597158 --label name.minikube.sigs.k8s.io=force-systemd-flag-597158 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:04:46.540906  533070 oci.go:103] Successfully created a docker volume force-systemd-flag-597158
	I1124 03:04:46.541005  533070 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-597158-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-597158 --entrypoint /usr/bin/test -v force-systemd-flag-597158:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:04:47.053003  533070 oci.go:107] Successfully prepared a docker volume force-systemd-flag-597158
	I1124 03:04:47.053197  533070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:04:47.053233  533070 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:04:47.053319  533070 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-597158:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:04:50.445476  533070 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-597158:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (3.392113787s)
	I1124 03:04:50.445506  533070 kic.go:203] duration metric: took 3.39226864s to extract preloaded images to volume ...
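
	[The 3.39s step above seeds the node's /var volume before the node container exists: the lz4 preload tarball is bind-mounted read-only into a throwaway container whose entrypoint is tar. Stripped of the kicbase digest and jenkins paths (TARBALL, IMAGE and my-node are placeholders), the pattern is:]

	    # Illustrative: extract a preloaded-images tarball into a named volume.
	    docker volume create my-node
	    docker run --rm --entrypoint /usr/bin/tar \
	      -v "$TARBALL":/preloaded.tar:ro \
	      -v my-node:/extractDir \
	      "$IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir
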
	W1124 03:04:50.445585  533070 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:04:50.445632  533070 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:04:50.445677  533070 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:04:50.506617  533070 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-597158 --name force-systemd-flag-597158 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-597158 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-597158 --network force-systemd-flag-597158 --ip 192.168.76.2 --volume force-systemd-flag-597158:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:04:50.840933  533070 cli_runner.go:164] Run: docker container inspect force-systemd-flag-597158 --format={{.State.Running}}
	I1124 03:04:50.872047  533070 cli_runner.go:164] Run: docker container inspect force-systemd-flag-597158 --format={{.State.Status}}
	I1124 03:04:50.895065  533070 cli_runner.go:164] Run: docker exec force-systemd-flag-597158 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:04:50.948050  533070 oci.go:144] the created container "force-systemd-flag-597158" has a running status.
	I1124 03:04:50.948093  533070 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/force-systemd-flag-597158/id_rsa...
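
	[Key provisioning ("Creating ssh key for kic") follows the pattern visible later in this log (the id_rsa.pub copy and the privileged chown at 03:04:51): generate a host-side keypair, inject the public half, fix ownership. A minimal stand-in, with my-node as a placeholder container name:]

	    # Illustrative: per-node SSH key, public half injected into the node.
	    ssh-keygen -t rsa -N '' -f ./id_rsa
	    docker exec my-node mkdir -p /home/docker/.ssh
	    docker cp ./id_rsa.pub my-node:/home/docker/.ssh/authorized_keys
	    docker exec --privileged my-node chown docker:docker /home/docker/.ssh/authorized_keys
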
	I1124 03:04:50.143877  532332 main.go:143] libmachine: SSH cmd err, output: <nil>: NoKubernetes-565297
	
	I1124 03:04:50.143984  532332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-565297
	I1124 03:04:50.164866  532332 main.go:143] libmachine: Using SSH client type: native
	I1124 03:04:50.165141  532332 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33368 <nil> <nil>}
	I1124 03:04:50.165170  532332 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sNoKubernetes-565297' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 NoKubernetes-565297/g' /etc/hosts;
				else 
					echo '127.0.1.1 NoKubernetes-565297' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:04:50.304034  532332 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:04:50.304063  532332 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:04:50.304088  532332 ubuntu.go:190] setting up certificates
	I1124 03:04:50.304117  532332 provision.go:84] configureAuth start
	I1124 03:04:50.304170  532332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-565297
	I1124 03:04:50.323405  532332 provision.go:143] copyHostCerts
	I1124 03:04:50.323443  532332 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:04:50.323480  532332 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:04:50.323492  532332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:04:50.323562  532332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:04:50.323665  532332 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:04:50.323692  532332 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:04:50.323700  532332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:04:50.323746  532332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:04:50.323830  532332 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:04:50.323854  532332 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:04:50.323861  532332 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:04:50.323936  532332 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:04:50.324035  532332 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.NoKubernetes-565297 san=[127.0.0.1 192.168.103.2 NoKubernetes-565297 localhost minikube]
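
	[configureAuth above generates a per-machine server certificate signed by the minikube CA, with exactly the SANs listed in the log line (loopback, the node IP, the machine name, localhost, minikube). minikube does this in Go; an equivalent openssl sketch with shortened placeholder paths would be:]

	    # Illustrative: server cert signed by an existing CA, SANs as in the log.
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -out server.csr -subj "/O=jenkins.NoKubernetes-565297"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	      -CAcreateserial -out server.pem -days 365 \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.103.2,DNS:NoKubernetes-565297,DNS:localhost,DNS:minikube')
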
	I1124 03:04:50.449530  532332 provision.go:177] copyRemoteCerts
	I1124 03:04:50.449578  532332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:04:50.449621  532332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-565297
	I1124 03:04:50.470787  532332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33368 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/NoKubernetes-565297/id_rsa Username:docker}
	I1124 03:04:50.576635  532332 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1124 03:04:50.576696  532332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:04:50.600111  532332 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1124 03:04:50.600173  532332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:04:50.618413  532332 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1124 03:04:50.618467  532332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 03:04:50.635594  532332 provision.go:87] duration metric: took 331.4648ms to configureAuth
	I1124 03:04:50.635618  532332 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:04:50.635791  532332 config.go:182] Loaded profile config "NoKubernetes-565297": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1124 03:04:50.635944  532332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-565297
	I1124 03:04:50.654733  532332 main.go:143] libmachine: Using SSH client type: native
	I1124 03:04:50.655049  532332 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33368 <nil> <nil>}
	I1124 03:04:50.655080  532332 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:04:50.986008  532332 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:04:50.986033  532332 machine.go:97] duration metric: took 4.223170461s to provisionDockerMachine
	I1124 03:04:50.986044  532332 client.go:176] duration metric: took 5.730366252s to LocalClient.Create
	I1124 03:04:50.986067  532332 start.go:167] duration metric: took 5.730429719s to libmachine.API.Create "NoKubernetes-565297"
	I1124 03:04:50.986079  532332 start.go:293] postStartSetup for "NoKubernetes-565297" (driver="docker")
	I1124 03:04:50.986089  532332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:04:50.986156  532332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:04:50.986202  532332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-565297
	I1124 03:04:51.006585  532332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33368 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/NoKubernetes-565297/id_rsa Username:docker}
	I1124 03:04:51.115675  532332 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:04:51.120149  532332 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:04:51.120178  532332 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:04:51.120191  532332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:04:51.120248  532332 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:04:51.120340  532332 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:04:51.120353  532332 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> /etc/ssl/certs/3490782.pem
	I1124 03:04:51.120443  532332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:04:51.128603  532332 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:04:51.150587  532332 start.go:296] duration metric: took 164.491396ms for postStartSetup
	I1124 03:04:51.151558  532332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-565297
	I1124 03:04:51.184090  532332 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/NoKubernetes-565297/config.json ...
	I1124 03:04:51.184668  532332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:04:51.184748  532332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-565297
	I1124 03:04:51.208681  532332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33368 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/NoKubernetes-565297/id_rsa Username:docker}
	I1124 03:04:51.311672  532332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:04:51.316580  532332 start.go:128] duration metric: took 6.062574583s to createHost
	I1124 03:04:51.316603  532332 start.go:83] releasing machines lock for "NoKubernetes-565297", held for 6.062739764s
	I1124 03:04:51.316669  532332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" NoKubernetes-565297
	I1124 03:04:51.335740  532332 ssh_runner.go:195] Run: cat /version.json
	I1124 03:04:51.335783  532332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-565297
	I1124 03:04:51.335816  532332 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:04:51.335911  532332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" NoKubernetes-565297
	I1124 03:04:51.355511  532332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33368 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/NoKubernetes-565297/id_rsa Username:docker}
	I1124 03:04:51.355732  532332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33368 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/NoKubernetes-565297/id_rsa Username:docker}
	I1124 03:04:51.453400  532332 ssh_runner.go:195] Run: systemctl --version
	I1124 03:04:51.514088  532332 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:04:51.554220  532332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:04:51.559365  532332 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:04:51.559431  532332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:04:51.585686  532332 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:04:51.585709  532332 start.go:496] detecting cgroup driver to use...
	I1124 03:04:51.585741  532332 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:04:51.585792  532332 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:04:51.603393  532332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:04:51.615125  532332 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:04:51.615172  532332 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:04:51.631872  532332 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:04:51.652239  532332 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:04:51.744823  532332 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:04:51.835335  532332 docker.go:234] disabling docker service ...
	I1124 03:04:51.835402  532332 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:04:51.855756  532332 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:04:51.868484  532332 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:04:51.957474  532332 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:04:52.054255  532332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:04:52.069129  532332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:04:52.085089  532332 binary.go:59] Skipping Kubernetes binary download due to --no-kubernetes flag
	I1124 03:04:52.085128  532332 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1124 03:04:52.085166  532332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:52.094693  532332 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:04:52.094744  532332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:52.103129  532332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:52.111535  532332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
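
	[Taken together, the sed passes above pin the pause image, switch CRI-O to the systemd cgroup manager, and re-add conmon_cgroup = "pod". A quick check of the result, with expected values taken from this log:]

	    # Illustrative: verify the keys the sed passes above rewrite.
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # expected here:
	    #   pause_image = "registry.k8s.io/pause:3.9"
	    #   cgroup_manager = "systemd"
	    #   conmon_cgroup = "pod"
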
	I1124 03:04:52.119974  532332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:04:52.127436  532332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:04:52.134805  532332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:04:52.142852  532332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:04:52.229533  532332 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:04:52.357500  532332 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:04:52.357586  532332 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:04:52.362122  532332 start.go:564] Will wait 60s for crictl version
	I1124 03:04:52.362242  532332 ssh_runner.go:195] Run: which crictl
	I1124 03:04:52.367151  532332 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:04:52.397231  532332 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
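
	[The two "Will wait 60s" gates above are simple polls on the socket path and on crictl; hand-rolled, they amount to:]

	    # Illustrative: wait up to 60s for the CRI-O socket, then query it.
	    for i in $(seq 60); do
	      [ -S /var/run/crio/crio.sock ] && break
	      sleep 1
	    done
	    sudo /usr/local/bin/crictl version
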
	I1124 03:04:52.397323  532332 ssh_runner.go:195] Run: crio --version
	I1124 03:04:52.429480  532332 ssh_runner.go:195] Run: crio --version
	I1124 03:04:52.461670  532332 out.go:179] * Preparing CRI-O 1.34.2 ...
	I1124 03:04:52.462916  532332 ssh_runner.go:195] Run: rm -f paused
	I1124 03:04:52.468006  532332 out.go:179] * Done! minikube is ready without Kubernetes!
	I1124 03:04:52.471679  532332 out.go:203] ╭───────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                       │
	│                        * Things to try without Kubernetes ...                         │
	│                                                                                       │
	│    - "minikube ssh" to SSH into minikube's node.                                      │
	│    - "minikube podman-env" to point your podman-cli to the podman inside minikube.    │
	│    - "minikube image" to build images without docker.                                 │
	│                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:04:50.752563  532081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:04:50.753244  532081 addons.go:530] duration metric: took 3.599595ms for enable addons: enabled=[]
	I1124 03:04:50.891301  532081 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:04:50.909039  532081 node_ready.go:35] waiting up to 6m0s for node "pause-530927" to be "Ready" ...
	I1124 03:04:50.918277  532081 node_ready.go:49] node "pause-530927" is "Ready"
	I1124 03:04:50.918305  532081 node_ready.go:38] duration metric: took 9.233245ms for node "pause-530927" to be "Ready" ...
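
	[The node readiness gate above checks the standard Ready condition; done by hand against this profile's context it is the one-liner:]

	    # Illustrative: the same readiness gate via kubectl.
	    kubectl --context pause-530927 wait --for=condition=Ready \
	      node/pause-530927 --timeout=6m0s
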
	I1124 03:04:50.918319  532081 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:04:50.918363  532081 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:04:50.932103  532081 api_server.go:72] duration metric: took 182.52714ms to wait for apiserver process to appear ...
	I1124 03:04:50.932135  532081 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:04:50.932160  532081 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:04:50.938185  532081 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
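
	[That healthz probe is an authenticated GET against the apiserver; it can be reproduced with curl and this profile's client certificate (MINIKUBE_DIR below stands in for the jenkins .minikube path shown later in this log):]

	    # Illustrative: the same healthz check done by hand.
	    curl --cacert "$MINIKUBE_DIR/ca.crt" \
	         --cert   "$MINIKUBE_DIR/profiles/pause-530927/client.crt" \
	         --key    "$MINIKUBE_DIR/profiles/pause-530927/client.key" \
	         https://192.168.85.2:8443/healthz
	    # -> ok
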
	I1124 03:04:50.939287  532081 api_server.go:141] control plane version: v1.34.1
	I1124 03:04:50.939322  532081 api_server.go:131] duration metric: took 7.179044ms to wait for apiserver health ...
	I1124 03:04:50.939364  532081 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:04:50.944498  532081 system_pods.go:59] 7 kube-system pods found
	I1124 03:04:50.944549  532081 system_pods.go:61] "coredns-66bc5c9577-tdqkw" [90051f5e-289d-4499-8b52-d8bc68631512] Running
	I1124 03:04:50.944563  532081 system_pods.go:61] "etcd-pause-530927" [f8386c07-1b10-4f37-a499-af2c60238b49] Running
	I1124 03:04:50.944570  532081 system_pods.go:61] "kindnet-w4w8g" [fd0ed429-3c79-45e1-98f4-79bde55a3425] Running
	I1124 03:04:50.944576  532081 system_pods.go:61] "kube-apiserver-pause-530927" [d7e53534-2baf-4430-b77b-cc959464c6de] Running
	I1124 03:04:50.944583  532081 system_pods.go:61] "kube-controller-manager-pause-530927" [8e1164c0-c745-4765-98ba-8939576b1aeb] Running
	I1124 03:04:50.944588  532081 system_pods.go:61] "kube-proxy-csp5q" [7d9650e0-fcfa-4781-b68f-14f737fafd40] Running
	I1124 03:04:50.944593  532081 system_pods.go:61] "kube-scheduler-pause-530927" [85b700a7-7338-4645-b969-324b5863c89d] Running
	I1124 03:04:50.944601  532081 system_pods.go:74] duration metric: took 5.161299ms to wait for pod list to return data ...
	I1124 03:04:50.944615  532081 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:04:50.948070  532081 default_sa.go:45] found service account: "default"
	I1124 03:04:50.948089  532081 default_sa.go:55] duration metric: took 3.46692ms for default service account to be created ...
	I1124 03:04:50.948100  532081 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:04:50.951514  532081 system_pods.go:86] 7 kube-system pods found
	I1124 03:04:50.951585  532081 system_pods.go:89] "coredns-66bc5c9577-tdqkw" [90051f5e-289d-4499-8b52-d8bc68631512] Running
	I1124 03:04:50.951596  532081 system_pods.go:89] "etcd-pause-530927" [f8386c07-1b10-4f37-a499-af2c60238b49] Running
	I1124 03:04:50.951602  532081 system_pods.go:89] "kindnet-w4w8g" [fd0ed429-3c79-45e1-98f4-79bde55a3425] Running
	I1124 03:04:50.951607  532081 system_pods.go:89] "kube-apiserver-pause-530927" [d7e53534-2baf-4430-b77b-cc959464c6de] Running
	I1124 03:04:50.951612  532081 system_pods.go:89] "kube-controller-manager-pause-530927" [8e1164c0-c745-4765-98ba-8939576b1aeb] Running
	I1124 03:04:50.951617  532081 system_pods.go:89] "kube-proxy-csp5q" [7d9650e0-fcfa-4781-b68f-14f737fafd40] Running
	I1124 03:04:50.951622  532081 system_pods.go:89] "kube-scheduler-pause-530927" [85b700a7-7338-4645-b969-324b5863c89d] Running
	I1124 03:04:50.951630  532081 system_pods.go:126] duration metric: took 3.483015ms to wait for k8s-apps to be running ...
	I1124 03:04:50.951638  532081 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:04:50.951684  532081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:04:50.965224  532081 system_svc.go:56] duration metric: took 13.57793ms WaitForService to wait for kubelet
	I1124 03:04:50.965249  532081 kubeadm.go:587] duration metric: took 215.68665ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:04:50.965275  532081 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:04:50.967759  532081 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:04:50.967792  532081 node_conditions.go:123] node cpu capacity is 8
	I1124 03:04:50.967814  532081 node_conditions.go:105] duration metric: took 2.533293ms to run NodePressure ...
	I1124 03:04:50.967830  532081 start.go:242] waiting for startup goroutines ...
	I1124 03:04:50.967842  532081 start.go:247] waiting for cluster config update ...
	I1124 03:04:50.967853  532081 start.go:256] writing updated cluster config ...
	I1124 03:04:50.968221  532081 ssh_runner.go:195] Run: rm -f paused
	I1124 03:04:50.972539  532081 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:04:50.973246  532081 kapi.go:59] client config for pause-530927: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21975-345525/.minikube/profiles/pause-530927/client.crt", KeyFile:"/home/jenkins/minikube-integration/21975-345525/.minikube/profiles/pause-530927/client.key", CAFile:"/home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1124 03:04:50.975794  532081 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-tdqkw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:04:50.979745  532081 pod_ready.go:94] pod "coredns-66bc5c9577-tdqkw" is "Ready"
	I1124 03:04:50.979765  532081 pod_ready.go:86] duration metric: took 3.949048ms for pod "coredns-66bc5c9577-tdqkw" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:04:50.981829  532081 pod_ready.go:83] waiting for pod "etcd-pause-530927" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:04:50.985980  532081 pod_ready.go:94] pod "etcd-pause-530927" is "Ready"
	I1124 03:04:50.985998  532081 pod_ready.go:86] duration metric: took 4.14929ms for pod "etcd-pause-530927" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:04:50.988070  532081 pod_ready.go:83] waiting for pod "kube-apiserver-pause-530927" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:04:50.992246  532081 pod_ready.go:94] pod "kube-apiserver-pause-530927" is "Ready"
	I1124 03:04:50.992267  532081 pod_ready.go:86] duration metric: took 4.174261ms for pod "kube-apiserver-pause-530927" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:04:50.994013  532081 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-530927" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:04:51.378008  532081 pod_ready.go:94] pod "kube-controller-manager-pause-530927" is "Ready"
	I1124 03:04:51.378041  532081 pod_ready.go:86] duration metric: took 384.009447ms for pod "kube-controller-manager-pause-530927" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:04:51.577162  532081 pod_ready.go:83] waiting for pod "kube-proxy-csp5q" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:04:51.976771  532081 pod_ready.go:94] pod "kube-proxy-csp5q" is "Ready"
	I1124 03:04:51.976796  532081 pod_ready.go:86] duration metric: took 399.605018ms for pod "kube-proxy-csp5q" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:04:52.177231  532081 pod_ready.go:83] waiting for pod "kube-scheduler-pause-530927" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:04:52.577123  532081 pod_ready.go:94] pod "kube-scheduler-pause-530927" is "Ready"
	I1124 03:04:52.577150  532081 pod_ready.go:86] duration metric: took 399.889765ms for pod "kube-scheduler-pause-530927" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:04:52.577165  532081 pod_ready.go:40] duration metric: took 1.604595599s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:04:52.635909  532081 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:04:52.640780  532081 out.go:179] * Done! kubectl is now configured to use "pause-530927" cluster and "default" namespace by default
	I1124 03:04:51.074235  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/force-systemd-flag-597158/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1124 03:04:51.074333  533070 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/force-systemd-flag-597158/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:04:51.101817  533070 cli_runner.go:164] Run: docker container inspect force-systemd-flag-597158 --format={{.State.Status}}
	I1124 03:04:51.122961  533070 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:04:51.122979  533070 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-597158 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:04:51.173278  533070 cli_runner.go:164] Run: docker container inspect force-systemd-flag-597158 --format={{.State.Status}}
	I1124 03:04:51.199535  533070 machine.go:94] provisionDockerMachine start ...
	I1124 03:04:51.199649  533070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-597158
	I1124 03:04:51.220309  533070 main.go:143] libmachine: Using SSH client type: native
	I1124 03:04:51.220599  533070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1124 03:04:51.220610  533070 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:04:51.366084  533070 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-597158
	
	I1124 03:04:51.366115  533070 ubuntu.go:182] provisioning hostname "force-systemd-flag-597158"
	I1124 03:04:51.366179  533070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-597158
	I1124 03:04:51.387453  533070 main.go:143] libmachine: Using SSH client type: native
	I1124 03:04:51.387731  533070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1124 03:04:51.387749  533070 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-597158 && echo "force-systemd-flag-597158" | sudo tee /etc/hostname
	I1124 03:04:51.543072  533070 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-597158
	
	I1124 03:04:51.543164  533070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-597158
	I1124 03:04:51.564950  533070 main.go:143] libmachine: Using SSH client type: native
	I1124 03:04:51.565169  533070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1124 03:04:51.565192  533070 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-597158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-597158/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-597158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:04:51.706442  533070 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:04:51.706481  533070 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:04:51.706521  533070 ubuntu.go:190] setting up certificates
	I1124 03:04:51.706537  533070 provision.go:84] configureAuth start
	I1124 03:04:51.706601  533070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-597158
	I1124 03:04:51.724472  533070 provision.go:143] copyHostCerts
	I1124 03:04:51.724504  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:04:51.724545  533070 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:04:51.724554  533070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:04:51.724612  533070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:04:51.724708  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:04:51.724744  533070 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:04:51.724752  533070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:04:51.724782  533070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:04:51.725376  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:04:51.725423  533070 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:04:51.725429  533070 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:04:51.725475  533070 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:04:51.725567  533070 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.force-systemd-flag-597158 san=[127.0.0.1 192.168.76.2 force-systemd-flag-597158 localhost minikube]
	I1124 03:04:51.808808  533070 provision.go:177] copyRemoteCerts
	I1124 03:04:51.808864  533070 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:04:51.808909  533070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-597158
	I1124 03:04:51.826754  533070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/force-systemd-flag-597158/id_rsa Username:docker}
	I1124 03:04:51.927547  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1124 03:04:51.927602  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:04:51.947694  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1124 03:04:51.947761  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:04:51.965611  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1124 03:04:51.965664  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1124 03:04:51.984679  533070 provision.go:87] duration metric: took 278.126574ms to configureAuth
	I1124 03:04:51.984706  533070 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:04:51.984929  533070 config.go:182] Loaded profile config "force-systemd-flag-597158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:04:51.985052  533070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-597158
	I1124 03:04:52.011769  533070 main.go:143] libmachine: Using SSH client type: native
	I1124 03:04:52.012086  533070 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33373 <nil> <nil>}
	I1124 03:04:52.012114  533070 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:04:52.297133  533070 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:04:52.297163  533070 machine.go:97] duration metric: took 1.097603018s to provisionDockerMachine
	I1124 03:04:52.297178  533070 client.go:176] duration metric: took 6.004071733s to LocalClient.Create
	I1124 03:04:52.297203  533070 start.go:167] duration metric: took 6.004144829s to libmachine.API.Create "force-systemd-flag-597158"
	I1124 03:04:52.297220  533070 start.go:293] postStartSetup for "force-systemd-flag-597158" (driver="docker")
	I1124 03:04:52.297236  533070 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:04:52.297308  533070 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:04:52.297384  533070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-597158
	I1124 03:04:52.316975  533070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/force-systemd-flag-597158/id_rsa Username:docker}
	I1124 03:04:52.425033  533070 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:04:52.429070  533070 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:04:52.429104  533070 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:04:52.429117  533070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:04:52.429163  533070 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:04:52.429267  533070 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:04:52.429281  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> /etc/ssl/certs/3490782.pem
	I1124 03:04:52.429437  533070 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:04:52.438166  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:04:52.459754  533070 start.go:296] duration metric: took 162.518011ms for postStartSetup
	I1124 03:04:52.460471  533070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-597158
	I1124 03:04:52.479472  533070 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/config.json ...
	I1124 03:04:52.479736  533070 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:04:52.479790  533070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-597158
	I1124 03:04:52.499930  533070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/force-systemd-flag-597158/id_rsa Username:docker}
	I1124 03:04:52.601080  533070 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:04:52.606513  533070 start.go:128] duration metric: took 6.318623288s to createHost
	I1124 03:04:52.606538  533070 start.go:83] releasing machines lock for "force-systemd-flag-597158", held for 6.318760107s
	I1124 03:04:52.606612  533070 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-597158
	I1124 03:04:52.629078  533070 ssh_runner.go:195] Run: cat /version.json
	I1124 03:04:52.629134  533070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-597158
	I1124 03:04:52.629184  533070 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:04:52.629292  533070 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-597158
	I1124 03:04:52.654794  533070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/force-systemd-flag-597158/id_rsa Username:docker}
	I1124 03:04:52.655024  533070 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33373 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/force-systemd-flag-597158/id_rsa Username:docker}
	I1124 03:04:52.838496  533070 ssh_runner.go:195] Run: systemctl --version
	I1124 03:04:52.846596  533070 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:04:52.892838  533070 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:04:52.898511  533070 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:04:52.898576  533070 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:04:52.925848  533070 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:04:52.925872  533070 start.go:496] detecting cgroup driver to use...
	I1124 03:04:52.925909  533070 start.go:500] using "systemd" cgroup driver as enforced via flags
	I1124 03:04:52.925971  533070 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:04:52.942922  533070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:04:52.957578  533070 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:04:52.957665  533070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:04:52.977283  533070 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:04:52.998993  533070 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:04:53.117516  533070 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:04:53.227317  533070 docker.go:234] disabling docker service ...
	I1124 03:04:53.227384  533070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:04:53.245816  533070 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:04:53.258571  533070 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:04:53.346604  533070 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:04:53.443045  533070 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:04:53.460379  533070 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:04:53.474787  533070 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:04:53.474843  533070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:53.485399  533070 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:04:53.485458  533070 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:53.493789  533070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:53.501746  533070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:53.510025  533070 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:04:53.517504  533070 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:53.526092  533070 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:04:53.540023  533070 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
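
	[This flow additionally whitelists net.ipv4.ip_unprivileged_port_start=0 as a CRI-O default sysctl, so pods may bind low ports. Inferring from the grep/sed pair above, the stanza it guarantees in 02-crio.conf should look like:]

	    # Illustrative: what the grep+sed pair above leaves in 02-crio.conf.
	    sudo grep -A1 'default_sysctls' /etc/crio/crio.conf.d/02-crio.conf
	    # expected here:
	    #   default_sysctls = [
	    #     "net.ipv4.ip_unprivileged_port_start=0",
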
	I1124 03:04:53.549827  533070 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:04:53.558146  533070 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:04:53.566489  533070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:04:53.651362  533070 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:04:53.792235  533070 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:04:53.792299  533070 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:04:53.796674  533070 start.go:564] Will wait 60s for crictl version
	I1124 03:04:53.796727  533070 ssh_runner.go:195] Run: which crictl
	I1124 03:04:53.800512  533070 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:04:53.826308  533070 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:04:53.826385  533070 ssh_runner.go:195] Run: crio --version
	I1124 03:04:53.857691  533070 ssh_runner.go:195] Run: crio --version
	I1124 03:04:53.891108  533070 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:04:53.892190  533070 cli_runner.go:164] Run: docker network inspect force-systemd-flag-597158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:04:53.909406  533070 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:04:53.913387  533070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:04:53.923767  533070 kubeadm.go:884] updating cluster {Name:force-systemd-flag-597158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-597158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:04:53.923880  533070 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:04:53.923937  533070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:04:53.956102  533070 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:04:53.956121  533070 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:04:53.956160  533070 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:04:53.980102  533070 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:04:53.980130  533070 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:04:53.980139  533070 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1124 03:04:53.980254  533070 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=force-systemd-flag-597158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-597158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
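The repeated ExecStart= in the unit text above is the standard systemd override idiom: in the 10-kubeadm.conf drop-in, the first (empty) ExecStart= clears the command inherited from kubelet.service, and the second defines the replacement; without the reset, systemd rejects a second ExecStart for a simple service. A hedged sketch of writing such a drop-in (the flag set is abbreviated; the real 375-byte file is scp'd into place further below):

package main

import "os"

// illustrative drop-in; minikube renders the real 10-kubeadm.conf from the
// full kubelet command line shown in the log above
const dropIn = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
`

func main() {
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
		panic(err)
	}
}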
	I1124 03:04:53.980325  533070 ssh_runner.go:195] Run: crio config
	I1124 03:04:54.028170  533070 cni.go:84] Creating CNI manager for ""
	I1124 03:04:54.028204  533070 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:04:54.028227  533070 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:04:54.028260  533070 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:force-systemd-flag-597158 NodeName:force-systemd-flag-597158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:04:54.028438  533070 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "force-systemd-flag-597158"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
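The rendered kubeadm config above is a single YAML stream holding four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml.new just below. A sketch of splitting such a stream back into its documents with gopkg.in/yaml.v3 (an assumed dependency; illustrative only, not minikube code):

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f) // one decoder walks all documents in the stream
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// each "---"-separated document declares its kind, e.g. InitConfiguration
		fmt.Println(doc["kind"])
	}
}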
	
	I1124 03:04:54.028513  533070 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:04:54.037460  533070 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:04:54.037515  533070 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:04:54.047385  533070 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1124 03:04:54.062370  533070 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:04:54.079707  533070 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1124 03:04:54.092838  533070 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:04:54.096440  533070 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:04:54.106698  533070 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:04:54.198563  533070 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:04:54.218181  533070 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158 for IP: 192.168.76.2
	I1124 03:04:54.218202  533070 certs.go:195] generating shared ca certs ...
	I1124 03:04:54.218222  533070 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:04:54.218386  533070 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:04:54.218445  533070 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:04:54.218459  533070 certs.go:257] generating profile certs ...
	I1124 03:04:54.218533  533070 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/client.key
	I1124 03:04:54.218552  533070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/client.crt with IP's: []
	I1124 03:04:54.384000  533070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/client.crt ...
	I1124 03:04:54.384023  533070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/client.crt: {Name:mk85d1cff94b6521cdd6795c52b4f489d623b8b8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:04:54.384181  533070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/client.key ...
	I1124 03:04:54.384195  533070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/client.key: {Name:mk6ffd4a040723155f570bde785c1dbb3bff29e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:04:54.384291  533070 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.key.7cfa23c2
	I1124 03:04:54.384306  533070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.crt.7cfa23c2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 03:04:54.587360  533070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.crt.7cfa23c2 ...
	I1124 03:04:54.587382  533070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.crt.7cfa23c2: {Name:mk4a904a59eb0fa1d280325dad93dbfcce6ecd82 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:04:54.587542  533070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.key.7cfa23c2 ...
	I1124 03:04:54.587561  533070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.key.7cfa23c2: {Name:mkb84d8fba0c243480ba566c8adefc82c756051f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:04:54.587678  533070 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.crt.7cfa23c2 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.crt
	I1124 03:04:54.587751  533070 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.key.7cfa23c2 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.key
	I1124 03:04:54.587804  533070 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/proxy-client.key
	I1124 03:04:54.587819  533070 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/proxy-client.crt with IP's: []
	I1124 03:04:54.618468  533070 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/proxy-client.crt ...
	I1124 03:04:54.618487  533070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/proxy-client.crt: {Name:mk02a3a1803f9b7d373d8b51a23b3b4d49a48c07 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:04:54.618617  533070 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/proxy-client.key ...
	I1124 03:04:54.618630  533070 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/proxy-client.key: {Name:mkd7181dfaf05c68adbe805d8d8bc6643c01acbb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
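The crypto.go:68/156/164 entries above generate each profile certificate as a key pair signed against the shared minikube CA, with the SAN IP list for the apiserver cert logged at 03:04:54.384306. A self-contained sketch of the same x509 flow using Go's standard library (self-signed here for brevity where minikube signs with its CA; the SANs and the 26280h lifetime are copied from the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{ // SAN IPs from the apiserver cert generation above
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	// parent == template makes this self-signed; minikube passes its CA
	// cert and key here instead
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}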
	I1124 03:04:54.618733  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1124 03:04:54.618756  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1124 03:04:54.618768  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1124 03:04:54.618782  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1124 03:04:54.618794  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1124 03:04:54.618806  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1124 03:04:54.618821  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1124 03:04:54.618840  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1124 03:04:54.618895  533070 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:04:54.618928  533070 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:04:54.618938  533070 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:04:54.618966  533070 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:04:54.618992  533070 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:04:54.619013  533070 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:04:54.619060  533070 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:04:54.619096  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:04:54.619118  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem -> /usr/share/ca-certificates/349078.pem
	I1124 03:04:54.619134  533070 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> /usr/share/ca-certificates/3490782.pem
	I1124 03:04:54.619731  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:04:54.637843  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:04:54.655029  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:04:54.672994  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:04:54.689873  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1124 03:04:54.707761  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:04:54.725143  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:04:54.742330  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:04:54.759220  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:04:54.778803  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:04:54.796792  533070 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:04:54.813944  533070 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:04:54.826101  533070 ssh_runner.go:195] Run: openssl version
	I1124 03:04:54.831812  533070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:04:54.839618  533070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:04:54.843062  533070 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:04:54.843105  533070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:04:54.878297  533070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:04:54.888702  533070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:04:54.898667  533070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:04:54.902945  533070 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:04:54.903002  533070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:04:54.943229  533070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:04:54.953032  533070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:04:54.962953  533070 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:04:54.967411  533070 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:04:54.967472  533070 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:04:55.025971  533070 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
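Each openssl x509 -hash -noout run above prints the certificate's subject-name hash, which then names the <hash>.0 symlink in /etc/ssl/certs (b5213941.0 for minikubeCA.pem, 51391683.0 and 3ec20f2e.0 for the two user certs) so OpenSSL-style trust lookups can locate the file by subject. A hedged sketch of that hash-and-link step, shelling out to the same openssl invocation the log shows:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert reproduces the logged pattern: hash the cert's subject with
// openssl, then point /etc/ssl/certs/<hash>.0 at it (ln -fs semantics).
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
	os.Remove(link) // replace any stale link, as -f does
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}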
	I1124 03:04:55.034677  533070 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:04:55.038855  533070 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:04:55.038935  533070 kubeadm.go:401] StartCluster: {Name:force-systemd-flag-597158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:force-systemd-flag-597158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:04:55.039019  533070 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:04:55.039072  533070 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:04:55.072976  533070 cri.go:89] found id: ""
	I1124 03:04:55.073057  533070 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:04:55.081938  533070 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:04:55.090818  533070 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:04:55.090860  533070 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:04:55.098711  533070 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:04:55.098726  533070 kubeadm.go:158] found existing configuration files:
	
	I1124 03:04:55.098773  533070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:04:55.106981  533070 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:04:55.107035  533070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:04:55.115695  533070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:04:55.123351  533070 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:04:55.123401  533070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:04:55.131540  533070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:04:55.139685  533070 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:04:55.139740  533070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:04:55.147743  533070 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:04:55.155963  533070 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:04:55.156007  533070 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:04:55.163489  533070 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
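The kubeadm init invocation above prepends the pinned /var/lib/minikube/binaries/v1.34.1 directory to PATH and ignores the preflight checks expected to trip inside a container (occupied dirs, Port-10250, Swap, NumCPU, Mem, SystemVerification, and the bridge-nf-call-iptables probe). A sketch of launching it the same way from Go (the error list is abbreviated here; flags come from the logged command):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// invoke the pinned binary directly, matching the env PATH=... prefix
	// in the logged command
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors=Swap,SystemVerification") // abbreviated list
	cmd.Env = append(os.Environ(), "PATH=/var/lib/minikube/binaries/v1.34.1:"+os.Getenv("PATH"))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}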
	I1124 03:04:55.205729  533070 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:04:55.205819  533070 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:04:55.227306  533070 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:04:55.227391  533070 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:04:55.227438  533070 kubeadm.go:319] OS: Linux
	I1124 03:04:55.227503  533070 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:04:55.227572  533070 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:04:55.227677  533070 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:04:55.227771  533070 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:04:55.227842  533070 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:04:55.227936  533070 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:04:55.228016  533070 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:04:55.228087  533070 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:04:55.299660  533070 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:04:55.299839  533070 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:04:55.299973  533070 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:04:55.307297  533070 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.421914023Z" level=info msg="RDT not available in the host system"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.421928609Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.422710235Z" level=info msg="Conmon does support the --sync option"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.422727397Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.422739916Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.42347793Z" level=info msg="Conmon does support the --sync option"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.423491996Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.428572765Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.428601564Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.429419881Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.429951879Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.430016653Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.506640561Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-tdqkw Namespace:kube-system ID:3cc380c559a6d6120a1caba210d34237e93c5636a4cd3c3c9a163f20ce94b0d9 UID:90051f5e-289d-4499-8b52-d8bc68631512 NetNS:/var/run/netns/6a5ff240-b5d1-4bec-ade2-b8f527323095 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006b4148}] Aliases:map[]}"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.506874736Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-tdqkw for CNI network kindnet (type=ptp)"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507375473Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507406933Z" level=info msg="Starting seccomp notifier watcher"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507477906Z" level=info msg="Create NRI interface"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507616495Z" level=info msg="built-in NRI default validator is disabled"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507632958Z" level=info msg="runtime interface created"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507648556Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507657208Z" level=info msg="runtime interface starting up..."
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507665411Z" level=info msg="starting plugins..."
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507681155Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.508130956Z" level=info msg="No systemd watchdog enabled"
	Nov 24 03:04:49 pause-530927 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e2563685bbbb7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   3cc380c559a6d       coredns-66bc5c9577-tdqkw               kube-system
	13c884591858d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   01f97aa2007c4       kindnet-w4w8g                          kube-system
	2a467466a32f3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   dbcc442f87a8f       kube-proxy-csp5q                       kube-system
	44717dfd0dd19       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   37 seconds ago      Running             kube-apiserver            0                   7f9c132d09b04       kube-apiserver-pause-530927            kube-system
	6c0c28404a489       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   37 seconds ago      Running             kube-controller-manager   0                   85a1a0fba1b46       kube-controller-manager-pause-530927   kube-system
	21c0f7b1624aa       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   37 seconds ago      Running             kube-scheduler            0                   0729599a5fd23       kube-scheduler-pause-530927            kube-system
	db979218070c8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   37 seconds ago      Running             etcd                      0                   e2fb6fe422c65       etcd-pause-530927                      kube-system
	
	
	==> coredns [e2563685bbbb787f2dcaa7031d98f881b07cff82aa3418fdff322f7ded3a781d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46610 - 59883 "HINFO IN 2324472139877320416.2566359676961692489. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.124038409s
	
	
	==> describe nodes <==
	Name:               pause-530927
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-530927
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=pause-530927
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_04_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:04:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-530927
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:04:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:04:44 +0000   Mon, 24 Nov 2025 03:04:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:04:44 +0000   Mon, 24 Nov 2025 03:04:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:04:44 +0000   Mon, 24 Nov 2025 03:04:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:04:44 +0000   Mon, 24 Nov 2025 03:04:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-530927
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                f54b67fa-e1f5-4877-8699-f78bd6737b17
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-tdqkw                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-530927                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-w4w8g                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-530927             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-pause-530927    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-csp5q                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-530927             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 25s                kube-proxy       
	  Normal  NodeHasSufficientMemory  38s (x8 over 38s)  kubelet          Node pause-530927 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x8 over 38s)  kubelet          Node pause-530927 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x8 over 38s)  kubelet          Node pause-530927 status is now: NodeHasSufficientPID
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s                kubelet          Node pause-530927 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s                kubelet          Node pause-530927 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s                kubelet          Node pause-530927 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s                node-controller  Node pause-530927 event: Registered Node pause-530927 in Controller
	  Normal  NodeReady                15s                kubelet          Node pause-530927 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 a4 5e 1f c0 90 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 ca fc 5f 92 50 08 06
	[Nov24 02:26] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.010203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023866] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +2.047771] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[Nov24 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +8.191144] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[ +16.382391] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[ +32.252621] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	
	
	==> etcd [db979218070c8912540c318fd1e65becb327b18a39073bb2f0cb3c7e22ec95cb] <==
	{"level":"warn","ts":"2025-11-24T03:04:21.071931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.088823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.098240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.108108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.121000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.128682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.139800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.149052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.169193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.173745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.183138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.193934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.288231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:28.657868Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.89196ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" limit:1 ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2025-11-24T03:04:28.657993Z","caller":"traceutil/trace.go:172","msg":"trace[1170804069] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:327; }","duration":"159.048387ms","start":"2025-11-24T03:04:28.498923Z","end":"2025-11-24T03:04:28.657972Z","steps":["trace[1170804069] 'range keys from in-memory index tree'  (duration: 158.751597ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:04:28.999398Z","caller":"traceutil/trace.go:172","msg":"trace[545788078] transaction","detail":"{read_only:false; response_revision:329; number_of_response:1; }","duration":"106.586137ms","start":"2025-11-24T03:04:28.892793Z","end":"2025-11-24T03:04:28.999379Z","steps":["trace[545788078] 'process raft request'  (duration: 106.457886ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:04:29.190356Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.980999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T03:04:29.190530Z","caller":"traceutil/trace.go:172","msg":"trace[1800801860] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:330; }","duration":"102.43643ms","start":"2025-11-24T03:04:29.088054Z","end":"2025-11-24T03:04:29.190491Z","steps":["trace[1800801860] 'range keys from in-memory index tree'  (duration: 88.235794ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:04:29.466863Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.468716ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" limit:1 ","response":"range_response_count:1 size:197"}
	{"level":"info","ts":"2025-11-24T03:04:29.466947Z","caller":"traceutil/trace.go:172","msg":"trace[829305916] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:332; }","duration":"123.554918ms","start":"2025-11-24T03:04:29.343376Z","end":"2025-11-24T03:04:29.466931Z","steps":["trace[829305916] 'range keys from in-memory index tree'  (duration: 123.352872ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:04:29.466953Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"174.858012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-11-24T03:04:29.466986Z","caller":"traceutil/trace.go:172","msg":"trace[512241956] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:332; }","duration":"174.893587ms","start":"2025-11-24T03:04:29.292084Z","end":"2025-11-24T03:04:29.466978Z","steps":["trace[512241956] 'range keys from in-memory index tree'  (duration: 174.618357ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:04:29.466860Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.612186ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/pause-530927\" limit:1 ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2025-11-24T03:04:29.467056Z","caller":"traceutil/trace.go:172","msg":"trace[1421433851] range","detail":"{range_begin:/registry/leases/kube-node-lease/pause-530927; range_end:; response_count:1; response_revision:332; }","duration":"177.816084ms","start":"2025-11-24T03:04:29.289221Z","end":"2025-11-24T03:04:29.467037Z","steps":["trace[1421433851] 'range keys from in-memory index tree'  (duration: 177.509181ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:04:29.813302Z","caller":"traceutil/trace.go:172","msg":"trace[1764706165] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"114.582346ms","start":"2025-11-24T03:04:29.698703Z","end":"2025-11-24T03:04:29.813285Z","steps":["trace[1764706165] 'process raft request'  (duration: 114.439657ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:04:56 up  1:47,  0 user,  load average: 4.20, 1.94, 1.37
	Linux pause-530927 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [13c884591858d5f3dd598f14e7d0e5092f90334981841d73ce5c724262401f23] <==
	I1124 03:04:30.899385       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:04:30.899838       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:04:30.900070       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:04:30.900323       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:04:30.900378       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:04:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:04:31.105841       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:04:31.106052       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:04:31.106101       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:04:31.107347       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:04:31.504440       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:04:31.504474       1 metrics.go:72] Registering metrics
	I1124 03:04:31.504590       1 controller.go:711] "Syncing nftables rules"
	I1124 03:04:41.108980       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:04:41.109053       1 main.go:301] handling current node
	I1124 03:04:51.111036       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:04:51.111090       1 main.go:301] handling current node
	
	
	==> kube-apiserver [44717dfd0dd19f67f9d5242ec3bdfd8f3ef090f8003cf2ea62ea03eadec418a2] <==
	I1124 03:04:22.256536       1 policy_source.go:240] refreshing policies
	I1124 03:04:22.258133       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:04:22.340604       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:04:22.340694       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:04:22.345311       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:04:22.345474       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:04:22.446225       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:04:23.109216       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:04:23.113485       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:04:23.113505       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:04:23.603180       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:04:23.636247       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:04:23.714031       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:04:23.718924       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 03:04:23.719858       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:04:23.723931       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:04:24.145595       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:04:24.599707       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:04:24.606813       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:04:24.614125       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:04:29.698181       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:04:30.056685       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:04:30.066540       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:04:30.102467       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [6c0c28404a4898860dddc52df5afabd341a06204e739de342cedcd93491a2e47] <==
	I1124 03:04:29.148353       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 03:04:29.148359       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:04:29.148604       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 03:04:29.150525       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:04:29.153766       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 03:04:29.158987       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 03:04:29.159088       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:04:29.164344       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 03:04:29.169599       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:04:29.171953       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 03:04:29.173139       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:04:29.173207       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:04:29.173229       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:04:29.175432       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 03:04:29.180678       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 03:04:29.203707       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-530927" podCIDRs=["10.244.0.0/24"]
	I1124 03:04:29.208824       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:04:29.245122       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:04:29.249139       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:04:29.264453       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:04:29.287640       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:04:29.287657       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:04:29.287664       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:04:29.287687       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:04:44.103694       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2a467466a32f30878109a8f26060e1d828a3a95ad82d66388beab1f18922dc66] <==
	I1124 03:04:30.692858       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:04:30.764647       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:04:30.865476       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:04:30.865538       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 03:04:30.865662       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:04:30.894348       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:04:30.894410       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:04:30.901639       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:04:30.902123       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:04:30.903304       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:04:30.904715       1 config.go:200] "Starting service config controller"
	I1124 03:04:30.904733       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:04:30.904787       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:04:30.904799       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:04:30.904818       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:04:30.904823       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:04:30.905168       1 config.go:309] "Starting node config controller"
	I1124 03:04:30.905225       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:04:31.005458       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:04:31.005434       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:04:31.005554       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:04:31.005585       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [21c0f7b1624aac3029a7217aacc15d8049cdbf93ea484ccfbaa5ca8134d8c67b] <==
	E1124 03:04:22.229028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:04:22.229093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:04:22.229154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:04:22.229240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:04:22.229249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:04:22.229436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:04:22.229439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:04:22.229750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:04:22.228701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:04:22.229926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:04:22.230322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:04:22.230803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:04:22.230762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:04:22.231601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:04:22.231854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:04:23.080773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:04:23.104183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:04:23.216072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 03:04:23.216656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:04:23.274118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:04:23.290541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:04:23.307439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:04:23.337982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:04:23.351244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1124 03:04:25.622134       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:04:41 pause-530927 kubelet[1306]: I1124 03:04:41.333399    1306 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:04:41 pause-530927 kubelet[1306]: I1124 03:04:41.452742    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90051f5e-289d-4499-8b52-d8bc68631512-config-volume\") pod \"coredns-66bc5c9577-tdqkw\" (UID: \"90051f5e-289d-4499-8b52-d8bc68631512\") " pod="kube-system/coredns-66bc5c9577-tdqkw"
	Nov 24 03:04:41 pause-530927 kubelet[1306]: I1124 03:04:41.452784    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpp2n\" (UniqueName: \"kubernetes.io/projected/90051f5e-289d-4499-8b52-d8bc68631512-kube-api-access-kpp2n\") pod \"coredns-66bc5c9577-tdqkw\" (UID: \"90051f5e-289d-4499-8b52-d8bc68631512\") " pod="kube-system/coredns-66bc5c9577-tdqkw"
	Nov 24 03:04:42 pause-530927 kubelet[1306]: I1124 03:04:42.551312    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tdqkw" podStartSLOduration=12.551290374 podStartE2EDuration="12.551290374s" podCreationTimestamp="2025-11-24 03:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:04:42.551165262 +0000 UTC m=+18.203605091" watchObservedRunningTime="2025-11-24 03:04:42.551290374 +0000 UTC m=+18.203730180"
	Nov 24 03:04:48 pause-530927 kubelet[1306]: W1124 03:04:48.457714    1306 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.457807    1306 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.457932    1306 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.457959    1306 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.457978    1306 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.549331    1306 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.549398    1306 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.549421    1306 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:48 pause-530927 kubelet[1306]: W1124 03:04:48.558578    1306 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 03:04:48 pause-530927 kubelet[1306]: W1124 03:04:48.688799    1306 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 03:04:48 pause-530927 kubelet[1306]: W1124 03:04:48.919481    1306 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 03:04:49 pause-530927 kubelet[1306]: W1124 03:04:49.278203    1306 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 03:04:49 pause-530927 kubelet[1306]: E1124 03:04:49.473604    1306 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:49 pause-530927 kubelet[1306]: E1124 03:04:49.473657    1306 kubelet.go:2996] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:49 pause-530927 kubelet[1306]: E1124 03:04:49.550338    1306 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 24 03:04:49 pause-530927 kubelet[1306]: E1124 03:04:49.550390    1306 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:49 pause-530927 kubelet[1306]: E1124 03:04:49.550405    1306 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:53 pause-530927 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 03:04:53 pause-530927 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 03:04:53 pause-530927 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 03:04:53 pause-530927 systemd[1]: kubelet.service: Consumed 1.142s CPU time.
	

-- /stdout --
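Reading the kubelet tail above: every CRI call fails with `dial unix /var/run/crio/crio.sock: connect: no such file or directory` before systemd stops the kubelet, which points at the container runtime becoming unreachable during the pause rather than at an API-server or RBAC problem. A minimal Go sketch of that check, assuming it is run inside the node (for example via `minikube ssh`); the socket path is the one from the log lines, everything else is illustrative:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Same socket the kubelet dials in the errors above.
	const sock = "/var/run/crio/crio.sock"
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		// Mirrors the kubelet's "connect: no such file or directory".
		fmt.Printf("CRI socket unavailable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("CRI socket reachable:", sock)
}

A persistent failure here reproduces the post-mortem's failure mode; success shortly afterwards would mean the socket loss was transient.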
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-530927 -n pause-530927
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-530927 -n pause-530927: exit status 2 (357.713404ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-530927 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-530927
helpers_test.go:243: (dbg) docker inspect pause-530927:

-- stdout --
	[
	    {
	        "Id": "0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d",
	        "Created": "2025-11-24T03:04:04.657703071Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 519703,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:04:04.712800156Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d/hostname",
	        "HostsPath": "/var/lib/docker/containers/0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d/hosts",
	        "LogPath": "/var/lib/docker/containers/0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d/0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d-json.log",
	        "Name": "/pause-530927",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-530927:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-530927",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0d817ee5c958310abd9b28f4021fa6d7a299f0a6a9182cf2162077d4689b8d6d",
	                "LowerDir": "/var/lib/docker/overlay2/569a1925d1d3ee07e61a9f64ecaa073fdfe0036dc354dbc0cfc70c5d6329014f-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/569a1925d1d3ee07e61a9f64ecaa073fdfe0036dc354dbc0cfc70c5d6329014f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/569a1925d1d3ee07e61a9f64ecaa073fdfe0036dc354dbc0cfc70c5d6329014f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/569a1925d1d3ee07e61a9f64ecaa073fdfe0036dc354dbc0cfc70c5d6329014f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-530927",
	                "Source": "/var/lib/docker/volumes/pause-530927/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-530927",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-530927",
	                "name.minikube.sigs.k8s.io": "pause-530927",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c25e9c6244fae20c993ea79082f112ec156855501d6cd036e63d2878bb1240ce",
	            "SandboxKey": "/var/run/docker/netns/c25e9c6244fa",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33353"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33354"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33357"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33355"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33356"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-530927": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9643fba55f3767c656603fb76039e52ab1853a2b57e72d597a928e3fcfc47a32",
	                    "EndpointID": "4b28d9ff5c90eb47c4734ac667319dc8e5a2ce80bc8a9ba6f5cb4a257da2be87",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "f6:e6:b1:c9:30:41",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-530927",
	                        "0d817ee5c958"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
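The `NetworkSettings.Ports` block above is the part that matters for the status checks that follow: the API server's 8443/tcp is published on 127.0.0.1:33356, and that is the endpoint the status probes hit. A small Go sketch, assuming the inspect JSON above is piped on stdin (a hypothetical helper, not part of minikube), that extracts the mapping:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Only the fields needed from `docker inspect` output.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	var cs []container // docker inspect prints a JSON array
	if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	for _, c := range cs {
		for _, b := range c.NetworkSettings.Ports["8443/tcp"] {
			// For the inspect above this prints 127.0.0.1:33356.
			fmt.Printf("apiserver published on %s:%s\n", b.HostIp, b.HostPort)
		}
	}
}

Usage would be along the lines of `docker inspect pause-530927 | go run main.go`.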
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-530927 -n pause-530927
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-530927 -n pause-530927: exit status 2 (329.77443ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-530927 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                            ARGS                                                             │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-029934 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:02 UTC │                     │
	│ stop    │ -p scheduled-stop-029934 --cancel-scheduled                                                                                 │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:02 UTC │ 24 Nov 25 03:02 UTC │
	│ stop    │ -p scheduled-stop-029934 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:02 UTC │                     │
	│ stop    │ -p scheduled-stop-029934 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:02 UTC │                     │
	│ stop    │ -p scheduled-stop-029934 --schedule 15s -v=5 --alsologtostderr                                                              │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:02 UTC │ 24 Nov 25 03:03 UTC │
	│ delete  │ -p scheduled-stop-029934                                                                                                    │ scheduled-stop-029934       │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │ 24 Nov 25 03:03 UTC │
	│ start   │ -p insufficient-storage-628185 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio            │ insufficient-storage-628185 │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │                     │
	│ delete  │ -p insufficient-storage-628185                                                                                              │ insufficient-storage-628185 │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │ 24 Nov 25 03:03 UTC │
	│ start   │ -p pause-530927 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                   │ pause-530927                │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p force-systemd-env-550049 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                  │ force-systemd-env-550049    │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p NoKubernetes-565297 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio               │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │                     │
	│ start   │ -p offline-crio-493654 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio           │ offline-crio-493654         │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p NoKubernetes-565297 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                       │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:03 UTC │ 24 Nov 25 03:04 UTC │
	│ delete  │ -p force-systemd-env-550049                                                                                                 │ force-systemd-env-550049    │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p cert-expiration-062725 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                      │ cert-expiration-062725      │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p NoKubernetes-565297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ delete  │ -p NoKubernetes-565297                                                                                                      │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ delete  │ -p offline-crio-493654                                                                                                      │ offline-crio-493654         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p pause-530927 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                            │ pause-530927                │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p NoKubernetes-565297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio       │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p force-systemd-flag-597158 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ force-systemd-flag-597158   │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │                     │
	│ ssh     │ -p NoKubernetes-565297 sudo systemctl is-active --quiet service kubelet                                                     │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │                     │
	│ pause   │ -p pause-530927 --alsologtostderr -v=5                                                                                      │ pause-530927                │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │                     │
	│ stop    │ -p NoKubernetes-565297                                                                                                      │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │ 24 Nov 25 03:04 UTC │
	│ start   │ -p NoKubernetes-565297 --driver=docker  --container-runtime=crio                                                            │ NoKubernetes-565297         │ jenkins │ v1.37.0 │ 24 Nov 25 03:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:04:55
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:04:55.994964  538790 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:04:55.995227  538790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:04:55.995231  538790 out.go:374] Setting ErrFile to fd 2...
	I1124 03:04:55.995234  538790 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:04:55.995409  538790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:04:55.995785  538790 out.go:368] Setting JSON to false
	I1124 03:04:55.996934  538790 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6443,"bootTime":1763947053,"procs":282,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:04:55.996993  538790 start.go:143] virtualization: kvm guest
	I1124 03:04:55.998818  538790 out.go:179] * [NoKubernetes-565297] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:04:56.000076  538790 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:04:56.000088  538790 notify.go:221] Checking for updates...
	I1124 03:04:56.002119  538790 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:04:56.003377  538790 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:04:56.005666  538790 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:04:56.006864  538790 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:04:56.008014  538790 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:04:55.309401  533070 out.go:252]   - Generating certificates and keys ...
	I1124 03:04:55.309500  533070 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:04:55.309612  533070 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:04:55.562286  533070 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:04:55.664138  533070 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:04:55.910237  533070 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:04:55.978217  533070 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:04:56.009550  538790 config.go:182] Loaded profile config "NoKubernetes-565297": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1124 03:04:56.010105  538790 start.go:1806] No Kubernetes version set for minikube, setting Kubernetes version to v0.0.0
	I1124 03:04:56.010135  538790 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:04:56.036937  538790 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:04:56.037072  538790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:04:56.100755  538790 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 03:04:56.089686873 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:04:56.100881  538790 docker.go:319] overlay module found
	I1124 03:04:56.102121  538790 out.go:179] * Using the docker driver based on existing profile
	I1124 03:04:56.103120  538790 start.go:309] selected driver: docker
	I1124 03:04:56.103127  538790 start.go:927] validating driver "docker" against &{Name:NoKubernetes-565297 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-565297 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:04:56.103202  538790 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:04:56.103279  538790 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:04:56.166247  538790 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 03:04:56.156239945 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:04:56.166821  538790 cni.go:84] Creating CNI manager for ""
	I1124 03:04:56.166877  538790 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:04:56.166977  538790 start.go:353] cluster config:
	{Name:NoKubernetes-565297 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-565297 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:04:56.168305  538790 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-565297
	I1124 03:04:56.170386  538790 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:04:56.171566  538790 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:04:56.172548  538790 preload.go:188] Checking if preload exists for k8s version v0.0.0 and runtime crio
	I1124 03:04:56.172626  538790 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:04:56.196073  538790 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:04:56.196090  538790 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	W1124 03:04:56.210550  538790 preload.go:144] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	W1124 03:04:56.391310  538790 preload.go:144] https://github.com/kubernetes-sigs/minikube-preloads/releases/download/v18/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I1124 03:04:56.391484  538790 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/NoKubernetes-565297/config.json ...
	I1124 03:04:56.391729  538790 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:04:56.391769  538790 start.go:360] acquireMachinesLock for NoKubernetes-565297: {Name:mk2aefd8df43ff94b09a45e0959945a9b92af952 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:04:56.391837  538790 start.go:364] duration metric: took 46.889µs to acquireMachinesLock for "NoKubernetes-565297"
	I1124 03:04:56.391853  538790 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:04:56.391858  538790 fix.go:54] fixHost starting: 
	I1124 03:04:56.392199  538790 cli_runner.go:164] Run: docker container inspect NoKubernetes-565297 --format={{.State.Status}}
	I1124 03:04:56.413940  538790 fix.go:112] recreateIfNeeded on NoKubernetes-565297: state=Stopped err=<nil>
	W1124 03:04:56.413977  538790 fix.go:138] unexpected machine state, will restart: <nil>
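
The two preload 404 warnings above are expected rather than a fault: no preload tarball is published for the synthetic Kubernetes version v0.0.0 used by NoKubernetes profiles, so minikube falls back to loading images directly. A minimal Go sketch (assuming outbound HTTPS access; the URL is copied from the preload.go warning) that confirms the same status code:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// HEAD avoids downloading a body; only the status code matters here.
	const url = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v0.0.0/preloaded-images-k8s-v18-v0.0.0-cri-o-overlay-amd64.tar.lz4"
	resp, err := http.Head(url)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println(resp.StatusCode) // expected: 404, matching the warnings above
}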
	
	
	==> CRI-O <==
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.421914023Z" level=info msg="RDT not available in the host system"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.421928609Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.422710235Z" level=info msg="Conmon does support the --sync option"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.422727397Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.422739916Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.42347793Z" level=info msg="Conmon does support the --sync option"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.423491996Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.428572765Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.428601564Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.429419881Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.429951879Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.430016653Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.506640561Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-tdqkw Namespace:kube-system ID:3cc380c559a6d6120a1caba210d34237e93c5636a4cd3c3c9a163f20ce94b0d9 UID:90051f5e-289d-4499-8b52-d8bc68631512 NetNS:/var/run/netns/6a5ff240-b5d1-4bec-ade2-b8f527323095 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006b4148}] Aliases:map[]}"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.506874736Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-tdqkw for CNI network kindnet (type=ptp)"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507375473Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507406933Z" level=info msg="Starting seccomp notifier watcher"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507477906Z" level=info msg="Create NRI interface"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507616495Z" level=info msg="built-in NRI default validator is disabled"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507632958Z" level=info msg="runtime interface created"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507648556Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507657208Z" level=info msg="runtime interface starting up..."
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507665411Z" level=info msg="starting plugins..."
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.507681155Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 24 03:04:49 pause-530927 crio[2169]: time="2025-11-24T03:04:49.508130956Z" level=info msg="No systemd watchdog enabled"
	Nov 24 03:04:49 pause-530927 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
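
This section shows a full CRI-O restart completing at 03:04:49: the config is re-read, the kindnet CNI network is re-registered, NRI starts, and systemd reports the unit started. That timing matters for the kubelet errors further down, which dial the runtime socket at 03:04:48-49 while it is briefly absent. A small sketch, assuming it is run inside the node (for example via minikube ssh) on a systemd host, that polls the unit state to observe such a restart window:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Poll once per second; around a pause/unpause this prints a short
	// "activating"/"inactive" window between two "active" readings.
	// systemctl exits non-zero when the unit is not active, so the error
	// is deliberately ignored and only the printed state is used.
	for i := 0; i < 10; i++ {
		out, _ := exec.Command("systemctl", "is-active", "crio").Output()
		fmt.Printf("%s crio.service: %s\n",
			time.Now().Format(time.RFC3339), strings.TrimSpace(string(out)))
		time.Sleep(time.Second)
	}
}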
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	e2563685bbbb7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 seconds ago      Running             coredns                   0                   3cc380c559a6d       coredns-66bc5c9577-tdqkw               kube-system
	13c884591858d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   27 seconds ago      Running             kindnet-cni               0                   01f97aa2007c4       kindnet-w4w8g                          kube-system
	2a467466a32f3       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   27 seconds ago      Running             kube-proxy                0                   dbcc442f87a8f       kube-proxy-csp5q                       kube-system
	44717dfd0dd19       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   38 seconds ago      Running             kube-apiserver            0                   7f9c132d09b04       kube-apiserver-pause-530927            kube-system
	6c0c28404a489       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   38 seconds ago      Running             kube-controller-manager   0                   85a1a0fba1b46       kube-controller-manager-pause-530927   kube-system
	21c0f7b1624aa       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   38 seconds ago      Running             kube-scheduler            0                   0729599a5fd23       kube-scheduler-pause-530927            kube-system
	db979218070c8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   38 seconds ago      Running             etcd                      0                   e2fb6fe422c65       etcd-pause-530927                      kube-system
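
The table above has the shape of crictl ps output: all seven system containers are Running with ATTEMPT 0, so nothing had crash-looped before the pause was attempted. A sketch of a host-side reproduction (hypothetical invocation; the binary path and profile name are taken from this report, and the profile must still exist):

package main

import (
	"os"
	"os/exec"
)

func main() {
	// Equivalent shell command:
	//   out/minikube-linux-amd64 ssh -p pause-530927 -- sudo crictl ps -a
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "pause-530927",
		"--", "sudo", "crictl", "ps", "-a")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		os.Exit(1)
	}
}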
	
	
	==> coredns [e2563685bbbb787f2dcaa7031d98f881b07cff82aa3418fdff322f7ded3a781d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46610 - 59883 "HINFO IN 2324472139877320416.2566359676961692489. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.124038409s
	
	
	==> describe nodes <==
	Name:               pause-530927
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-530927
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=pause-530927
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_04_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:04:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-530927
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:04:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:04:44 +0000   Mon, 24 Nov 2025 03:04:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:04:44 +0000   Mon, 24 Nov 2025 03:04:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:04:44 +0000   Mon, 24 Nov 2025 03:04:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:04:44 +0000   Mon, 24 Nov 2025 03:04:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    pause-530927
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                f54b67fa-e1f5-4877-8699-f78bd6737b17
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-tdqkw                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-pause-530927                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         34s
	  kube-system                 kindnet-w4w8g                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-530927             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-530927    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-csp5q                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-530927             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  40s (x8 over 40s)  kubelet          Node pause-530927 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s (x8 over 40s)  kubelet          Node pause-530927 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s (x8 over 40s)  kubelet          Node pause-530927 status is now: NodeHasSufficientPID
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s                kubelet          Node pause-530927 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s                kubelet          Node pause-530927 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s                kubelet          Node pause-530927 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node pause-530927 event: Registered Node pause-530927 in Controller
	  Normal  NodeReady                17s                kubelet          Node pause-530927 status is now: NodeReady
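
The "Allocated resources" block above is just the column sums of the Non-terminated Pods table; a quick arithmetic check with the values copied from that table:

package main

import "fmt"

func main() {
	// Per-pod requests from the table, in order: coredns, etcd, kindnet,
	// kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler.
	cpuReqMilli := []int{100, 100, 100, 250, 200, 0, 100}
	memReqMi := []int{70, 100, 50, 0, 0, 0, 0}
	var cpu, mem int
	for i := range cpuReqMilli {
		cpu += cpuReqMilli[i]
		mem += memReqMi[i]
	}
	// Prints 850m and 220Mi, matching the report (850m of 8 CPUs is the 10%).
	fmt.Printf("cpu requests: %dm, memory requests: %dMi\n", cpu, mem)
}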
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 36 a4 5e 1f c0 90 08 06
	[  +0.000454] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 86 ca fc 5f 92 50 08 06
	[Nov24 02:26] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.010203] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023904] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023880] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023866] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000025] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +1.023882] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +2.047771] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000019] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[Nov24 02:27] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[  +8.191144] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[ +16.382391] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
	[ +32.252621] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: ce 32 06 19 cf de 1a ea b3 3d 3b cc 08 00
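
The repeated "martian source" entries are the kernel's reverse-path filter logging packets whose source address should not appear on that interface (here 127.0.0.1 and pod IPs arriving on eth0), which is noisy but common with container NAT. They are only emitted when the log_martians sysctl is enabled; a sketch, assuming a Linux host, that reads the current setting:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// A value of 1 means the kernel logs martian packets, producing
	// dmesg lines like those above.
	b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("net.ipv4.conf.all.log_martians =", strings.TrimSpace(string(b)))
}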
	
	
	==> etcd [db979218070c8912540c318fd1e65becb327b18a39073bb2f0cb3c7e22ec95cb] <==
	{"level":"warn","ts":"2025-11-24T03:04:21.071931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.088823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.098240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.108108Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.121000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.128682Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.139800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.149052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.169193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.173745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.183138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.193934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:21.288231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:04:28.657868Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"158.89196ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/namespace-controller\" limit:1 ","response":"range_response_count:1 size:205"}
	{"level":"info","ts":"2025-11-24T03:04:28.657993Z","caller":"traceutil/trace.go:172","msg":"trace[1170804069] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/namespace-controller; range_end:; response_count:1; response_revision:327; }","duration":"159.048387ms","start":"2025-11-24T03:04:28.498923Z","end":"2025-11-24T03:04:28.657972Z","steps":["trace[1170804069] 'range keys from in-memory index tree'  (duration: 158.751597ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:04:28.999398Z","caller":"traceutil/trace.go:172","msg":"trace[545788078] transaction","detail":"{read_only:false; response_revision:329; number_of_response:1; }","duration":"106.586137ms","start":"2025-11-24T03:04:28.892793Z","end":"2025-11-24T03:04:28.999379Z","steps":["trace[545788078] 'process raft request'  (duration: 106.457886ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:04:29.190356Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.980999ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T03:04:29.190530Z","caller":"traceutil/trace.go:172","msg":"trace[1800801860] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices; range_end:; response_count:0; response_revision:330; }","duration":"102.43643ms","start":"2025-11-24T03:04:29.088054Z","end":"2025-11-24T03:04:29.190491Z","steps":["trace[1800801860] 'range keys from in-memory index tree'  (duration: 88.235794ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:04:29.466863Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"123.468716ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" limit:1 ","response":"range_response_count:1 size:197"}
	{"level":"info","ts":"2025-11-24T03:04:29.466947Z","caller":"traceutil/trace.go:172","msg":"trace[829305916] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:332; }","duration":"123.554918ms","start":"2025-11-24T03:04:29.343376Z","end":"2025-11-24T03:04:29.466931Z","steps":["trace[829305916] 'range keys from in-memory index tree'  (duration: 123.352872ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:04:29.466953Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"174.858012ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" limit:1 ","response":"range_response_count:1 size:209"}
	{"level":"info","ts":"2025-11-24T03:04:29.466986Z","caller":"traceutil/trace.go:172","msg":"trace[512241956] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/root-ca-cert-publisher; range_end:; response_count:1; response_revision:332; }","duration":"174.893587ms","start":"2025-11-24T03:04:29.292084Z","end":"2025-11-24T03:04:29.466978Z","steps":["trace[512241956] 'range keys from in-memory index tree'  (duration: 174.618357ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:04:29.466860Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"177.612186ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-node-lease/pause-530927\" limit:1 ","response":"range_response_count:1 size:552"}
	{"level":"info","ts":"2025-11-24T03:04:29.467056Z","caller":"traceutil/trace.go:172","msg":"trace[1421433851] range","detail":"{range_begin:/registry/leases/kube-node-lease/pause-530927; range_end:; response_count:1; response_revision:332; }","duration":"177.816084ms","start":"2025-11-24T03:04:29.289221Z","end":"2025-11-24T03:04:29.467037Z","steps":["trace[1421433851] 'range keys from in-memory index tree'  (duration: 177.509181ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:04:29.813302Z","caller":"traceutil/trace.go:172","msg":"trace[1764706165] transaction","detail":"{read_only:false; response_revision:333; number_of_response:1; }","duration":"114.582346ms","start":"2025-11-24T03:04:29.698703Z","end":"2025-11-24T03:04:29.813285Z","steps":["trace[1764706165] 'process raft request'  (duration: 114.439657ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:04:58 up  1:47,  0 user,  load average: 4.20, 1.94, 1.37
	Linux pause-530927 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [13c884591858d5f3dd598f14e7d0e5092f90334981841d73ce5c724262401f23] <==
	I1124 03:04:30.899385       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:04:30.899838       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:04:30.900070       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:04:30.900323       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:04:30.900378       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:04:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:04:31.105841       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:04:31.106052       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:04:31.106101       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:04:31.107347       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:04:31.504440       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:04:31.504474       1 metrics.go:72] Registering metrics
	I1124 03:04:31.504590       1 controller.go:711] "Syncing nftables rules"
	I1124 03:04:41.108980       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:04:41.109053       1 main.go:301] handling current node
	I1124 03:04:51.111036       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:04:51.111090       1 main.go:301] handling current node
	
	
	==> kube-apiserver [44717dfd0dd19f67f9d5242ec3bdfd8f3ef090f8003cf2ea62ea03eadec418a2] <==
	I1124 03:04:22.256536       1 policy_source.go:240] refreshing policies
	I1124 03:04:22.258133       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:04:22.340604       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:04:22.340694       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:04:22.345311       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:04:22.345474       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:04:22.446225       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:04:23.109216       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:04:23.113485       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:04:23.113505       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:04:23.603180       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:04:23.636247       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:04:23.714031       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:04:23.718924       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 03:04:23.719858       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:04:23.723931       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:04:24.145595       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:04:24.599707       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:04:24.606813       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:04:24.614125       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:04:29.698181       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:04:30.056685       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:04:30.066540       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:04:30.102467       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:04:30.102467       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [6c0c28404a4898860dddc52df5afabd341a06204e739de342cedcd93491a2e47] <==
	I1124 03:04:29.148353       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 03:04:29.148359       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:04:29.148604       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 03:04:29.150525       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:04:29.153766       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 03:04:29.158987       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 03:04:29.159088       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:04:29.164344       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 03:04:29.169599       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:04:29.171953       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 03:04:29.173139       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:04:29.173207       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:04:29.173229       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:04:29.175432       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 03:04:29.180678       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 03:04:29.203707       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-530927" podCIDRs=["10.244.0.0/24"]
	I1124 03:04:29.208824       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:04:29.245122       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:04:29.249139       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:04:29.264453       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:04:29.287640       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:04:29.287657       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:04:29.287664       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:04:29.287687       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:04:44.103694       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2a467466a32f30878109a8f26060e1d828a3a95ad82d66388beab1f18922dc66] <==
	I1124 03:04:30.692858       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:04:30.764647       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:04:30.865476       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:04:30.865538       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 03:04:30.865662       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:04:30.894348       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:04:30.894410       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:04:30.901639       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:04:30.902123       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:04:30.903304       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:04:30.904715       1 config.go:200] "Starting service config controller"
	I1124 03:04:30.904733       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:04:30.904787       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:04:30.904799       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:04:30.904818       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:04:30.904823       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:04:30.905168       1 config.go:309] "Starting node config controller"
	I1124 03:04:30.905225       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:04:31.005458       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:04:31.005434       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:04:31.005554       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:04:31.005585       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [21c0f7b1624aac3029a7217aacc15d8049cdbf93ea484ccfbaa5ca8134d8c67b] <==
	E1124 03:04:22.229028       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:04:22.229093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:04:22.229154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:04:22.229240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:04:22.229249       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:04:22.229436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:04:22.229439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:04:22.229750       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:04:22.228701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:04:22.229926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:04:22.230322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:04:22.230803       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:04:22.230762       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:04:22.231601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:04:22.231854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:04:23.080773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:04:23.104183       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:04:23.216072       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 03:04:23.216656       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:04:23.274118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:04:23.290541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:04:23.307439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:04:23.337982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:04:23.351244       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1124 03:04:25.622134       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:04:41 pause-530927 kubelet[1306]: I1124 03:04:41.333399    1306 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:04:41 pause-530927 kubelet[1306]: I1124 03:04:41.452742    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/90051f5e-289d-4499-8b52-d8bc68631512-config-volume\") pod \"coredns-66bc5c9577-tdqkw\" (UID: \"90051f5e-289d-4499-8b52-d8bc68631512\") " pod="kube-system/coredns-66bc5c9577-tdqkw"
	Nov 24 03:04:41 pause-530927 kubelet[1306]: I1124 03:04:41.452784    1306 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpp2n\" (UniqueName: \"kubernetes.io/projected/90051f5e-289d-4499-8b52-d8bc68631512-kube-api-access-kpp2n\") pod \"coredns-66bc5c9577-tdqkw\" (UID: \"90051f5e-289d-4499-8b52-d8bc68631512\") " pod="kube-system/coredns-66bc5c9577-tdqkw"
	Nov 24 03:04:42 pause-530927 kubelet[1306]: I1124 03:04:42.551312    1306 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-tdqkw" podStartSLOduration=12.551290374 podStartE2EDuration="12.551290374s" podCreationTimestamp="2025-11-24 03:04:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:04:42.551165262 +0000 UTC m=+18.203605091" watchObservedRunningTime="2025-11-24 03:04:42.551290374 +0000 UTC m=+18.203730180"
	Nov 24 03:04:48 pause-530927 kubelet[1306]: W1124 03:04:48.457714    1306 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.457807    1306 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.457932    1306 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.457959    1306 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.457978    1306 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.549331    1306 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.549398    1306 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:48 pause-530927 kubelet[1306]: E1124 03:04:48.549421    1306 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:48 pause-530927 kubelet[1306]: W1124 03:04:48.558578    1306 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 03:04:48 pause-530927 kubelet[1306]: W1124 03:04:48.688799    1306 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 03:04:48 pause-530927 kubelet[1306]: W1124 03:04:48.919481    1306 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 03:04:49 pause-530927 kubelet[1306]: W1124 03:04:49.278203    1306 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 03:04:49 pause-530927 kubelet[1306]: E1124 03:04:49.473604    1306 log.go:32] "Status from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:49 pause-530927 kubelet[1306]: E1124 03:04:49.473657    1306 kubelet.go:2996] "Container runtime sanity check failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:49 pause-530927 kubelet[1306]: E1124 03:04:49.550338    1306 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 24 03:04:49 pause-530927 kubelet[1306]: E1124 03:04:49.550390    1306 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:49 pause-530927 kubelet[1306]: E1124 03:04:49.550405    1306 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 03:04:53 pause-530927 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 03:04:53 pause-530927 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 03:04:53 pause-530927 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 03:04:53 pause-530927 systemd[1]: kubelet.service: Consumed 1.142s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-530927 -n pause-530927
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-530927 -n pause-530927: exit status 2 (328.963134ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-530927 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.00s)
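Note: the kubelet journal above is consistent with the container runtime being down during the pause rather than with a kubelet defect: every CRI call fails dialing /var/run/crio/crio.sock ("no such file or directory") until systemd stops kubelet.service, yet the follow-up status probe still prints "Running" for the apiserver while exiting 2. A minimal manual check of the same node state, assuming only the profile name shown in the logs (standard minikube/systemctl invocations, not part of the harness):

	out/minikube-linux-amd64 ssh -p pause-530927 -- sudo systemctl is-active crio kubelet  # expected inactive for both while the profile is paused
	out/minikube-linux-amd64 ssh -p pause-530927 -- ls -l /var/run/crio/crio.sock          # socket should be absent while crio is stopped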

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-438041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-438041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (249.84376ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:11:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-438041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
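Note: MK_ADDON_ENABLE_PAUSED comes from minikube's paused-state check which, per the error text above, runs `sudo runc list -f json` on the node and treats its failure as fatal; here the command fails because /run/runc is missing. The failing check can be replayed by hand with the same command the error quotes (profile name from this test; nothing beyond the quoted command is assumed):

	out/minikube-linux-amd64 ssh -p newest-cni-438041 -- sudo runc list -f json  # reproduces: open /run/runc: no such file or directory
	out/minikube-linux-amd64 ssh -p newest-cni-438041 -- ls -ld /run/runc        # confirms whether the runc state directory exists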
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-438041
helpers_test.go:243: (dbg) docker inspect newest-cni-438041:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64",
	        "Created": "2025-11-24T03:11:03.961758173Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 641372,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:11:04.122326756Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64/hosts",
	        "LogPath": "/var/lib/docker/containers/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64-json.log",
	        "Name": "/newest-cni-438041",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-438041:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-438041",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64",
	                "LowerDir": "/var/lib/docker/overlay2/0710d621fdc7311911f475a554b8f76b82b86a9db0b0e85e3045f0e2074e3cc1-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0710d621fdc7311911f475a554b8f76b82b86a9db0b0e85e3045f0e2074e3cc1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0710d621fdc7311911f475a554b8f76b82b86a9db0b0e85e3045f0e2074e3cc1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0710d621fdc7311911f475a554b8f76b82b86a9db0b0e85e3045f0e2074e3cc1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-438041",
	                "Source": "/var/lib/docker/volumes/newest-cni-438041/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-438041",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-438041",
	                "name.minikube.sigs.k8s.io": "newest-cni-438041",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "d517e7dfd1a5ee76f375ee3bac7ad14b3cff1635f8c32a04397a846bfcef5603",
	            "SandboxKey": "/var/run/docker/netns/d517e7dfd1a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-438041": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b30d540ef88b055a6ad3cc188fd27395739f217150ea48ac734e123a015ff9c1",
	                    "EndpointID": "66a182fd076daf0eb025a4a6afcf13a71138e89f6d2a5f584b38f16475a39d1a",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "ae:91:7b:bf:49:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-438041",
	                        "7dcb0e0e285e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
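Note: when reading post-mortems like this, the full docker inspect dump can be narrowed to the fields the checks above actually consult (run/pause state and the published port map) with standard --format templates; a sketch using the container name from this test:

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' newest-cni-438041
	docker inspect -f '{{json .NetworkSettings.Ports}}' newest-cni-438041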
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438041 -n newest-cni-438041
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-438041 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-438041 logs -n 25: (1.007254602s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-965704 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-965704                │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                       │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ delete  │ -p kubernetes-upgrade-034173                                                                                                                                                                                                                  │ kubernetes-upgrade-034173    │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                       │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo systemctl cat docker --no-pager                                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /etc/docker/daemon.json                                                                                                                                                                                            │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo docker system info                                                                                                                                                                                                     │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                               │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                         │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cri-dockerd --version                                                                                                                                                                                                  │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo systemctl cat containerd --no-pager                                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo containerd config dump                                                                                                                                                                                                 │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo crio config                                                                                                                                                                                                            │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ delete  │ -p flannel-965704                                                                                                                                                                                                                             │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-438041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:10:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:10:57.127829  639611 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:10:57.127990  639611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:10:57.128000  639611 out.go:374] Setting ErrFile to fd 2...
	I1124 03:10:57.128004  639611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:10:57.128242  639611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:10:57.128839  639611 out.go:368] Setting JSON to false
	I1124 03:10:57.129993  639611 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6804,"bootTime":1763947053,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:10:57.130043  639611 start.go:143] virtualization: kvm guest
	I1124 03:10:57.131842  639611 out.go:179] * [newest-cni-438041] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:10:57.133006  639611 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:10:57.133003  639611 notify.go:221] Checking for updates...
	I1124 03:10:57.135165  639611 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:10:57.136402  639611 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:10:57.137671  639611 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:10:57.138741  639611 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:10:57.139904  639611 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:10:57.141390  639611 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:10:57.141496  639611 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:10:57.141578  639611 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:10:57.141703  639611 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:10:57.166641  639611 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:10:57.166738  639611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:10:57.221961  639611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-24 03:10:57.211378242 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:10:57.222054  639611 docker.go:319] overlay module found
	I1124 03:10:57.223745  639611 out.go:179] * Using the docker driver based on user configuration
	I1124 03:10:57.224957  639611 start.go:309] selected driver: docker
	I1124 03:10:57.224977  639611 start.go:927] validating driver "docker" against <nil>
	I1124 03:10:57.224994  639611 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:10:57.225758  639611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:10:57.290865  639611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-24 03:10:57.279924959 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:10:57.291115  639611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1124 03:10:57.291161  639611 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1124 03:10:57.291452  639611 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:10:57.293881  639611 out.go:179] * Using Docker driver with root privileges
	I1124 03:10:57.295058  639611 cni.go:84] Creating CNI manager for ""
	I1124 03:10:57.295146  639611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:10:57.295161  639611 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:10:57.295265  639611 start.go:353] cluster config:
	{Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:10:57.296817  639611 out.go:179] * Starting "newest-cni-438041" primary control-plane node in "newest-cni-438041" cluster
	I1124 03:10:57.297866  639611 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:10:57.299907  639611 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:10:57.301070  639611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:10:57.301103  639611 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:10:57.301112  639611 cache.go:65] Caching tarball of preloaded images
	I1124 03:10:57.301177  639611 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:10:57.301210  639611 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:10:57.301222  639611 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:10:57.301343  639611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json ...
	I1124 03:10:57.301366  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json: {Name:mk1bf53574cdc9152c6531d50672e7a950b9d2f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:10:57.325407  639611 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:10:57.325433  639611 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:10:57.325454  639611 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:10:57.325494  639611 start.go:360] acquireMachinesLock for newest-cni-438041: {Name:mk895e89056f5ce7564002ba75457dcfde41ce4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:10:57.325596  639611 start.go:364] duration metric: took 82.202µs to acquireMachinesLock for "newest-cni-438041"
	I1124 03:10:57.325624  639611 start.go:93] Provisioning new machine with config: &{Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:10:57.325724  639611 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:10:55.541109  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (3.244075519s)
	I1124 03:10:55.541150  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1124 03:10:55.541172  631782 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 03:10:55.541227  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 03:10:56.794831  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.25357343s)
	I1124 03:10:56.794863  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 03:10:56.794908  631782 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 03:10:56.794989  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 03:10:55.833612  636397 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993813:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.620337954s)
	I1124 03:10:55.833645  636397 kic.go:203] duration metric: took 5.620509753s to extract preloaded images to volume ...
	W1124 03:10:55.833730  636397 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:10:55.833774  636397 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:10:55.833824  636397 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:10:55.899529  636397 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993813 --name default-k8s-diff-port-993813 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993813 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993813 --network default-k8s-diff-port-993813 --ip 192.168.76.2 --volume default-k8s-diff-port-993813:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:10:56.489655  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Running}}
	I1124 03:10:56.513036  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:10:56.535229  636397 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993813 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:10:56.595848  636397 oci.go:144] the created container "default-k8s-diff-port-993813" has a running status.
	I1124 03:10:56.595922  636397 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa...
	I1124 03:10:56.701587  636397 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:10:56.875193  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:10:56.894915  636397 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:10:56.894937  636397 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993813 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:10:56.946242  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:10:56.964911  636397 machine.go:94] provisionDockerMachine start ...
	I1124 03:10:56.965003  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:10:56.983380  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:10:56.983615  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:10:56.983627  636397 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:10:56.984346  636397 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37014->127.0.0.1:33468: read: connection reset by peer
	I1124 03:10:57.234863  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:57.734595  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:58.234694  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:58.734330  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:59.234707  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:59.735106  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:00.234710  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:00.735086  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:01.235238  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:01.735122  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:57.328166  639611 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:10:57.328471  639611 start.go:159] libmachine.API.Create for "newest-cni-438041" (driver="docker")
	I1124 03:10:57.328503  639611 client.go:173] LocalClient.Create starting
	I1124 03:10:57.328585  639611 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:10:57.328619  639611 main.go:143] libmachine: Decoding PEM data...
	I1124 03:10:57.328645  639611 main.go:143] libmachine: Parsing certificate...
	I1124 03:10:57.328730  639611 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:10:57.328758  639611 main.go:143] libmachine: Decoding PEM data...
	I1124 03:10:57.328776  639611 main.go:143] libmachine: Parsing certificate...
	I1124 03:10:57.329238  639611 cli_runner.go:164] Run: docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:10:57.347161  639611 cli_runner.go:211] docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:10:57.347240  639611 network_create.go:284] running [docker network inspect newest-cni-438041] to gather additional debugging logs...
	I1124 03:10:57.347259  639611 cli_runner.go:164] Run: docker network inspect newest-cni-438041
	W1124 03:10:57.366750  639611 cli_runner.go:211] docker network inspect newest-cni-438041 returned with exit code 1
	I1124 03:10:57.366777  639611 network_create.go:287] error running [docker network inspect newest-cni-438041]: docker network inspect newest-cni-438041: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-438041 not found
	I1124 03:10:57.366807  639611 network_create.go:289] output of [docker network inspect newest-cni-438041]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-438041 not found
	
	** /stderr **
	I1124 03:10:57.366976  639611 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:10:57.385293  639611 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:10:57.386152  639611 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:10:57.387409  639611 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:10:57.388971  639611 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-50b2e4e61586 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:0d:88:19:7d:df} reservation:<nil>}
	I1124 03:10:57.389487  639611 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6fb41680caed IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:a0:6c:44:95:b2} reservation:<nil>}
	I1124 03:10:57.390236  639611 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018f44a0}
	I1124 03:10:57.390257  639611 network_create.go:124] attempt to create docker network newest-cni-438041 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 03:10:57.390305  639611 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-438041 newest-cni-438041
	I1124 03:10:57.440525  639611 network_create.go:108] docker network newest-cni-438041 192.168.94.0/24 created
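	For reference, the subnet scan and network creation logged above map onto plain docker commands. A minimal sketch (illustrative, not captured from this run):
	    # List the subnets already claimed by existing Docker networks
	    docker network ls -q | xargs -n1 docker network inspect \
	      -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	    # Create a bridge network on the chosen free /24 with a fixed gateway and MTU
	    docker network create --driver=bridge \
	      --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
	      -o com.docker.network.driver.mtu=1500 newest-cni-438041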
	I1124 03:10:57.440568  639611 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-438041" container
	I1124 03:10:57.440642  639611 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:10:57.458704  639611 cli_runner.go:164] Run: docker volume create newest-cni-438041 --label name.minikube.sigs.k8s.io=newest-cni-438041 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:10:57.476351  639611 oci.go:103] Successfully created a docker volume newest-cni-438041
	I1124 03:10:57.476450  639611 cli_runner.go:164] Run: docker run --rm --name newest-cni-438041-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-438041 --entrypoint /usr/bin/test -v newest-cni-438041:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:10:58.353729  639611 oci.go:107] Successfully prepared a docker volume newest-cni-438041
	I1124 03:10:58.353794  639611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:10:58.353806  639611 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:10:58.353903  639611 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-438041:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
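	The "preload sidecar" pattern above populates a named volume before the node container ever starts: a throwaway container mounts both the host tarball and the volume, and tar extracts into the volume. A hand-rolled equivalent, assuming an image that ships tar and lz4 (image name and tarball path below are placeholders):
	    docker volume create demo-vol
	    docker run --rm \
	      -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
	      -v demo-vol:/extractDir \
	      --entrypoint /usr/bin/tar \
	      example/base-with-lz4 -I lz4 -xf /preloaded.tar -C /extractDir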
	I1124 03:10:58.184837  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.389817981s)
	I1124 03:10:58.184869  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1124 03:10:58.184909  631782 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:10:58.184953  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:11:00.135230  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:11:00.135263  636397 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993813"
	I1124 03:11:00.135337  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.156666  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:00.157040  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:11:00.157061  636397 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993813 && echo "default-k8s-diff-port-993813" | sudo tee /etc/hostname
	I1124 03:11:00.317337  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:11:00.317424  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.338575  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:00.338824  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:11:00.338843  636397 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993813/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:11:00.487669  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:11:00.487698  636397 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:11:00.487736  636397 ubuntu.go:190] setting up certificates
	I1124 03:11:00.487751  636397 provision.go:84] configureAuth start
	I1124 03:11:00.487815  636397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:11:00.511564  636397 provision.go:143] copyHostCerts
	I1124 03:11:00.511630  636397 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:11:00.511666  636397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:11:00.511735  636397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:11:00.514009  636397 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:11:00.514030  636397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:11:00.514075  636397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:11:00.514159  636397 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:11:00.514167  636397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:11:00.514200  636397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:11:00.514270  636397 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993813 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-993813 localhost minikube]
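	minikube generates this server certificate in Go, but the same artifact (CA-signed, with the SAN list shown above) can be sketched with openssl; treat this as an illustration of the inputs, not the tool's actual code path:
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -out server.csr -subj "/O=jenkins.default-k8s-diff-port-993813"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	      -CAcreateserial -out server.pem -days 365 \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:default-k8s-diff-port-993813,DNS:localhost,DNS:minikube')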
	I1124 03:11:00.658058  636397 provision.go:177] copyRemoteCerts
	I1124 03:11:00.658133  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:11:00.658198  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.678015  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:00.787811  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:11:00.908237  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 03:11:00.926667  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:11:00.945146  636397 provision.go:87] duration metric: took 457.380171ms to configureAuth
	I1124 03:11:00.945175  636397 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:11:00.945368  636397 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:00.945497  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.963523  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:00.963843  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:11:00.963867  636397 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:11:01.528016  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:11:01.528042  636397 machine.go:97] duration metric: took 4.563106275s to provisionDockerMachine
	I1124 03:11:01.528055  636397 client.go:176] duration metric: took 12.433514854s to LocalClient.Create
	I1124 03:11:01.528076  636397 start.go:167] duration metric: took 12.433610792s to libmachine.API.Create "default-k8s-diff-port-993813"
	I1124 03:11:01.528087  636397 start.go:293] postStartSetup for "default-k8s-diff-port-993813" (driver="docker")
	I1124 03:11:01.528107  636397 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:11:01.528192  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:11:01.528250  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:01.550426  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:01.725783  636397 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:11:01.731121  636397 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:11:01.731156  636397 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:11:01.731171  636397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:11:01.731245  636397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:11:01.731344  636397 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:11:01.731461  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:11:01.741273  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:02.020513  636397 start.go:296] duration metric: took 492.40359ms for postStartSetup
	I1124 03:11:02.119944  636397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:11:02.137546  636397 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/config.json ...
	I1124 03:11:02.185355  636397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:11:02.185405  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:02.201426  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:02.297393  636397 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:11:02.302398  636397 start.go:128] duration metric: took 13.210072434s to createHost
	I1124 03:11:02.302422  636397 start.go:83] releasing machines lock for "default-k8s-diff-port-993813", held for 13.210223546s
	I1124 03:11:02.302502  636397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:11:02.319872  636397 ssh_runner.go:195] Run: cat /version.json
	I1124 03:11:02.319913  636397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:11:02.319948  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:02.319995  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:02.340353  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:02.340353  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:02.486835  636397 ssh_runner.go:195] Run: systemctl --version
	I1124 03:11:02.493433  636397 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:11:02.533294  636397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:11:02.538557  636397 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:11:02.538616  636397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:11:02.908750  636397 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
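	Note that the find command above is printed with its shell quoting stripped by the logger; the likely original shape (a reconstruction, not verbatim from minikube's source) is:
	    sudo find /etc/cni/net.d -maxdepth 1 -type f \
	      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	      -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;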
	I1124 03:11:02.908778  636397 start.go:496] detecting cgroup driver to use...
	I1124 03:11:02.908812  636397 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:11:02.908861  636397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:11:02.925941  636397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:11:02.941046  636397 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:11:02.941102  636397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:11:02.959121  636397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:11:02.975801  636397 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:11:03.054110  636397 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:11:03.174491  636397 docker.go:234] disabling docker service ...
	I1124 03:11:03.174560  636397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:11:03.193664  636397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:11:03.207203  636397 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:11:03.340321  636397 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:11:03.515878  636397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:11:03.529161  636397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:11:03.543103  636397 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:11:03.543166  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.604968  636397 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:11:03.605035  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.624611  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.645648  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.689119  636397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:11:03.698440  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.783084  636397 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
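	Taken together, the sed edits above leave the CRI-O drop-in in roughly this state (a sketch assembled from values in this run; the net.ipv4 entry is appended by a follow-up sed a few steps later in the same sequence, and unrelated fields are omitted):
	    # /etc/crio/crio.conf.d/02-crio.conf (illustrative excerpt)
	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"
	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]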
	I1124 03:11:02.234544  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:02.735113  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:03.234728  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:03.735125  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:03.823251  623347 kubeadm.go:1114] duration metric: took 11.180431183s to wait for elevateKubeSystemPrivileges
	I1124 03:11:03.823284  623347 kubeadm.go:403] duration metric: took 22.234422884s to StartCluster
	I1124 03:11:03.823307  623347 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:03.823374  623347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:03.824432  623347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:03.824684  623347 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:03.824740  623347 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:03.824845  623347 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-579951"
	I1124 03:11:03.824727  623347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:03.824906  623347 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-579951"
	I1124 03:11:03.824917  623347 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:11:03.824923  623347 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-579951"
	I1124 03:11:03.824900  623347 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-579951"
	I1124 03:11:03.825024  623347 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:03.825377  623347 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:03.825590  623347 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:03.826953  623347 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:03.828395  623347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:03.862253  623347 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-579951"
	I1124 03:11:03.862302  623347 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:03.862810  623347 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:03.864365  623347 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:03.807318  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.820946  636397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:11:03.839099  636397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:11:03.853603  636397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:04.008696  636397 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:11:04.280958  636397 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:11:04.281140  636397 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:11:04.287138  636397 start.go:564] Will wait 60s for crictl version
	I1124 03:11:04.287195  636397 ssh_runner.go:195] Run: which crictl
	I1124 03:11:04.296400  636397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:11:04.343627  636397 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
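	The version probe above reads its endpoint from the /etc/crictl.yaml written a few lines earlier; the same query can be made with the endpoint given explicitly:
	    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version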
	I1124 03:11:04.343993  636397 ssh_runner.go:195] Run: crio --version
	I1124 03:11:04.389849  636397 ssh_runner.go:195] Run: crio --version
	I1124 03:11:04.426944  636397 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:11:03.866933  623347 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:03.866992  623347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:03.867050  623347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:03.908181  623347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:03.911219  623347 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:03.911443  623347 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:03.911619  623347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:03.949048  623347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:03.966864  623347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:04.039230  623347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:04.056821  623347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:04.079844  623347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:04.252855  623347 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:04.253835  623347 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-579951" to be "Ready" ...
	I1124 03:11:04.604404  623347 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:04.605457  623347 addons.go:530] duration metric: took 780.71049ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:04.763969  623347 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-579951" context rescaled to 1 replicas
	W1124 03:11:06.257869  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	I1124 03:11:03.812979  639611 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-438041:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.459016714s)
	I1124 03:11:03.813017  639611 kic.go:203] duration metric: took 5.459207202s to extract preloaded images to volume ...
	W1124 03:11:03.813173  639611 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:11:03.813255  639611 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:11:03.813304  639611 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:11:03.930433  639611 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-438041 --name newest-cni-438041 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-438041 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-438041 --network newest-cni-438041 --ip 192.168.94.2 --volume newest-cni-438041:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:11:04.484106  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Running}}
	I1124 03:11:04.506492  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:04.527784  639611 cli_runner.go:164] Run: docker exec newest-cni-438041 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:11:04.586541  639611 oci.go:144] the created container "newest-cni-438041" has a running status.
	I1124 03:11:04.586577  639611 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa...
	I1124 03:11:04.720361  639611 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:11:04.758530  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:04.794751  639611 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:11:04.794778  639611 kic_runner.go:114] Args: [docker exec --privileged newest-cni-438041 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:11:04.848966  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:04.868444  639611 machine.go:94] provisionDockerMachine start ...
	I1124 03:11:04.868542  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:04.886704  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:04.887098  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:04.887115  639611 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:11:04.887825  639611 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60056->127.0.0.1:33473: read: connection reset by peer
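	The "connection reset by peer" here is expected on first contact: sshd inside the freshly started container is not up yet, and the provisioner retries. An equivalent wait loop in shell (key path and port taken from this run):
	    until ssh -o StrictHostKeyChecking=no -o ConnectTimeout=2 \
	          -i /home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa \
	          -p 33473 docker@127.0.0.1 true 2>/dev/null; do
	      sleep 1
	    done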
	I1124 03:11:03.698009  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.513031284s)
	I1124 03:11:03.698036  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 03:11:03.698072  631782 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:11:03.698135  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:11:04.540749  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 03:11:04.540878  631782 cache_images.go:125] Successfully loaded all cached images
	I1124 03:11:04.540962  631782 cache_images.go:94] duration metric: took 16.632965714s to LoadCachedImages
	I1124 03:11:04.540998  631782 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 03:11:04.541478  631782 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-603010 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:11:04.541629  631782 ssh_runner.go:195] Run: crio config
	I1124 03:11:04.613074  631782 cni.go:84] Creating CNI manager for ""
	I1124 03:11:04.613101  631782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:04.613135  631782 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:11:04.613165  631782 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603010 NodeName:no-preload-603010 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:04.613332  631782 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603010"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
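	A config of this shape can be checked without mutating the node by running kubeadm in dry-run mode (illustrative; the test itself proceeds straight to the real kubeadm init below):
	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run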
	I1124 03:11:04.613410  631782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:04.624805  631782 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 03:11:04.624880  631782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:04.636504  631782 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1124 03:11:04.636570  631782 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1124 03:11:04.636598  631782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 03:11:04.637106  631782 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1124 03:11:04.641001  631782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 03:11:04.641031  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
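	The checksum URLs logged above follow the standard dl.k8s.io convention; verifying such a download by hand looks like this (the documented upstream pattern):
	    curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl"
	    curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256"
	    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check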
	I1124 03:11:05.924351  631782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:05.942273  631782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 03:11:05.947268  631782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 03:11:05.947299  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1124 03:11:06.319700  631782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 03:11:06.328312  631782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 03:11:06.328362  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
	I1124 03:11:06.576699  631782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:06.584640  631782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:11:06.596881  631782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:06.706372  631782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1124 03:11:06.725651  631782 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:06.731312  631782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:06.856376  631782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:06.964324  631782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:06.983343  631782 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010 for IP: 192.168.85.2
	I1124 03:11:06.983368  631782 certs.go:195] generating shared ca certs ...
	I1124 03:11:06.983389  631782 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:06.983554  631782 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:06.983623  631782 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:06.983638  631782 certs.go:257] generating profile certs ...
	I1124 03:11:06.983713  631782 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key
	I1124 03:11:06.983731  631782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.crt with IP's: []
	I1124 03:11:07.236879  631782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.crt ...
	I1124 03:11:07.236911  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.crt: {Name:mk2d55635da2a9326437d41d4577da0fe14409fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.237058  631782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key ...
	I1124 03:11:07.237070  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key: {Name:mkaa577d5c9ee92828884715bd0dda9017fc9779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.237153  631782 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738
	I1124 03:11:07.237166  631782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 03:11:07.327953  631782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738 ...
	I1124 03:11:07.327981  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738: {Name:mk8a9cae6d8e3a4cc6d6140e38080bb869e23acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.328138  631782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738 ...
	I1124 03:11:07.328156  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738: {Name:mkbf13b81ddaf24f4938052522adb9836ef8e1d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.328261  631782 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt
	I1124 03:11:07.328354  631782 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key
	I1124 03:11:07.328436  631782 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key
	I1124 03:11:07.328458  631782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt with IP's: []
	I1124 03:11:07.358779  631782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt ...
	I1124 03:11:07.358798  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt: {Name:mk394a0184e993e66f37c39d12264673ee1326c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.358929  631782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key ...
	I1124 03:11:07.358944  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key: {Name:mkf0922c5b9c127348bd0d94fa6adc983ccc147a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
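	To confirm what the cert generation above produced, the SANs baked into the apiserver cert can be read back with openssl (path taken from this run):
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'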
	I1124 03:11:07.359146  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:07.359197  631782 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:07.359210  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:07.359245  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:07.359288  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:07.359324  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:07.359391  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:07.360046  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:07.377802  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:07.394719  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:07.411226  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:07.427651  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:11:07.443818  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:11:07.461178  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:07.477210  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:11:07.493639  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:07.511874  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:07.528421  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:07.544763  631782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:07.557346  631782 ssh_runner.go:195] Run: openssl version
	I1124 03:11:07.563499  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:07.571402  631782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:07.574952  631782 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:07.575004  631782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:07.608612  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:07.616619  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:07.624657  631782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:07.628272  631782 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:07.628318  631782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:07.662522  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:11:07.670558  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:07.678360  631782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:07.681796  631782 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:07.681850  631782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:07.715936  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
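	The hash-named symlinks created above follow OpenSSL's CApath convention: a CA is looked up by its subject hash plus a ".0" suffix, so verification can point at the directory directly (some-cert.pem below is a placeholder):
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    openssl verify -CApath /etc/ssl/certs some-cert.pem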
	I1124 03:11:07.723734  631782 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:07.727008  631782 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:11:07.727066  631782 kubeadm.go:401] StartCluster: {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:07.727159  631782 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:07.727200  631782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:07.757836  631782 cri.go:89] found id: ""
	I1124 03:11:07.757930  631782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:07.767026  631782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:11:07.775281  631782 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:11:07.775329  631782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:11:07.782944  631782 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:11:07.782960  631782 kubeadm.go:158] found existing configuration files:
	
	I1124 03:11:07.782996  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:11:07.790173  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:11:07.790211  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:11:07.797407  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:11:07.804469  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:11:07.804513  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:11:07.811339  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:11:07.818449  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:11:07.818485  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:11:07.825301  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:11:07.832368  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:11:07.832409  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
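The four grep-then-rm pairs above are minikube's stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it already references the expected control-plane endpoint, and is deleted otherwise so kubeadm can regenerate it. A minimal standalone sketch of the same loop, using the endpoint and paths from this run (to be executed on the node, not the host):

    # Drop kubeconfigs that do not point at the expected API endpoint.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null; then
        sudo rm -f "/etc/kubernetes/$f"   # stale or missing; 'kubeadm init' recreates it
      fi
    done

In this run every grep exits with status 2 because the files do not exist yet (first start), so all four rm calls are no-ops.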
	I1124 03:11:07.839105  631782 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:11:07.875134  631782 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:11:07.875186  631782 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:11:07.899771  631782 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:11:07.899860  631782 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:11:07.899936  631782 kubeadm.go:319] OS: Linux
	I1124 03:11:07.900023  631782 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:11:07.900109  631782 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:11:07.900181  631782 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:11:07.900246  631782 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:11:07.900310  631782 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:11:07.900374  631782 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:11:07.900436  631782 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:11:07.900489  631782 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:11:07.966533  631782 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:11:07.966689  631782 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:11:07.966849  631782 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:11:07.981358  631782 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
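The preflight output above points out that the control-plane images can be fetched ahead of time. Using the kubeadm binary and config path from the init command in this log, that pre-pull step would look like:

    # List and pre-pull the control-plane images referenced by the config.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images list --config /var/tmp/minikube/kubeadm.yaml
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml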
	I1124 03:11:04.428062  636397 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:11:04.452862  636397 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:11:04.458281  636397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
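The one-liner above is how minikube edits /etc/hosts without sed -i: it filters out any existing host.minikube.internal line, appends the fresh mapping, writes the result to a temp file, and installs it with a single sudo cp. The same commands, unrolled for readability (literal tabs written as $'\t'):

    # Refresh the host.minikube.internal entry via a temp file.
    {
      grep -v $'\thost.minikube.internal$' /etc/hosts      # keep everything except the old entry
      echo $'192.168.76.1\thost.minikube.internal'         # append the current gateway mapping
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts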
	I1124 03:11:04.471103  636397 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:11:04.471281  636397 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:11:04.471346  636397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:04.523060  636397 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:04.523089  636397 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:11:04.523147  636397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:04.562653  636397 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:04.562684  636397 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:11:04.562695  636397 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 03:11:04.562806  636397 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:11:04.562939  636397 ssh_runner.go:195] Run: crio config
	I1124 03:11:04.638357  636397 cni.go:84] Creating CNI manager for ""
	I1124 03:11:04.638382  636397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:04.638402  636397 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:11:04.638430  636397 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993813 NodeName:default-k8s-diff-port-993813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:04.638602  636397 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993813"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:11:04.638670  636397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:04.649639  636397 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:11:04.649707  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:04.665638  636397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 03:11:04.685753  636397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:04.706728  636397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
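The kubeadm.yaml.new just copied to the node is the four-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one YAML stream). Recent kubeadm releases can sanity-check such a file before init; a sketch using the binary and path from this run:

    # Validate the multi-document config before handing it to 'kubeadm init'.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new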
	I1124 03:11:04.727449  636397 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:04.732474  636397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:04.750204  636397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:04.878850  636397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:04.905254  636397 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813 for IP: 192.168.76.2
	I1124 03:11:04.905269  636397 certs.go:195] generating shared ca certs ...
	I1124 03:11:04.905285  636397 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:04.905416  636397 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:04.905456  636397 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:04.905465  636397 certs.go:257] generating profile certs ...
	I1124 03:11:04.905521  636397 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key
	I1124 03:11:04.905533  636397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.crt with IP's: []
	I1124 03:11:05.049206  636397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.crt ...
	I1124 03:11:05.049242  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.crt: {Name:mk818bd7c5f4a63b56241a5f5b815a5c96f8af6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.049427  636397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key ...
	I1124 03:11:05.049453  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key: {Name:mkb83de72d7be9aac5a3b6d7ffec3016949857c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.049582  636397 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619
	I1124 03:11:05.049600  636397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 03:11:05.290005  636397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619 ...
	I1124 03:11:05.290086  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619: {Name:mkbe37296015109a5ee861e9a87e29d9440c243c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.290281  636397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619 ...
	I1124 03:11:05.290300  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619: {Name:mk596e1b3db31f58cc0b8eb40ec231f070ee1f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.290403  636397 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt
	I1124 03:11:05.290503  636397 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key
	I1124 03:11:05.290584  636397 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key
	I1124 03:11:05.290607  636397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt with IP's: []
	I1124 03:11:05.405376  636397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt ...
	I1124 03:11:05.405411  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt: {Name:mk5c1d3bc48ab0dc1254aae88b7ec32711e77a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.405578  636397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key ...
	I1124 03:11:05.405599  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key: {Name:mk42df1886b091d28840c422e5e20c0f8c4e5569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
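The apiserver certificate generated above embeds the service IP, loopbacks, and node IP (10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2) as subject alternative names. To confirm what actually landed in the cert, on the host side:

    # Print the SANs baked into the freshly generated apiserver cert.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt \
      | grep -A1 'Subject Alternative Name'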
	I1124 03:11:05.405873  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:05.405948  636397 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:05.405959  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:05.406001  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:05.406031  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:05.406059  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:05.406113  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:05.406989  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:05.434254  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:05.460107  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:05.485830  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:05.511902  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 03:11:05.535282  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:11:05.558610  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:05.579558  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:11:05.598340  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:05.620622  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:05.644303  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:05.667291  636397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:05.681732  636397 ssh_runner.go:195] Run: openssl version
	I1124 03:11:05.689816  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:05.701038  636397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:05.705646  636397 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:05.705699  636397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:05.763638  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:11:05.776210  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:05.789125  636397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:05.794258  636397 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:05.794315  636397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:05.853631  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:11:05.886140  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:05.898078  636397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:05.902187  636397 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:05.902252  636397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:06.009788  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
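The openssl x509 -hash calls above compute the subject-hash filenames (3ec20f2e.0, b5213941.0, 51391683.0) under which OpenSSL looks up CA certificates in /etc/ssl/certs, and the test-then-link guards keep the setup idempotent. The same pattern for a single cert:

    # Link a CA cert under its OpenSSL subject-hash name so TLS clients can find it.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # b5213941 for this cert, per the log
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"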
	I1124 03:11:06.034772  636397 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:06.040075  636397 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:11:06.040136  636397 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:06.040285  636397 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:06.040340  636397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:06.076603  636397 cri.go:89] found id: ""
	I1124 03:11:06.076664  636397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:06.084730  636397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:11:06.096161  636397 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:11:06.096213  636397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:11:06.104666  636397 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:11:06.104687  636397 kubeadm.go:158] found existing configuration files:
	
	I1124 03:11:06.104736  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1124 03:11:06.112142  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:11:06.112188  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:11:06.119278  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1124 03:11:06.126557  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:11:06.126604  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:11:06.133611  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1124 03:11:06.141319  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:11:06.141384  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:11:06.151450  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1124 03:11:06.162299  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:11:06.162489  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:11:06.173268  636397 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:11:06.365493  636397 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:11:06.445191  636397 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
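Both preflight warnings above are expected in this environment: the GCP kernel ships without the 'configs' module, and minikube starts kubelet itself rather than enabling the systemd unit. On a long-lived node the second warning is silenced exactly as the message suggests:

    # Enable kubelet at boot (addresses '[WARNING Service-Kubelet]').
    sudo systemctl enable kubelet.service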
	I1124 03:11:08.034430  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-438041
	
	I1124 03:11:08.034458  639611 ubuntu.go:182] provisioning hostname "newest-cni-438041"
	I1124 03:11:08.034525  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.053306  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:08.053556  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:08.053570  639611 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-438041 && echo "newest-cni-438041" | sudo tee /etc/hostname
	I1124 03:11:08.201604  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-438041
	
	I1124 03:11:08.201678  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.220581  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:08.220950  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:08.220977  639611 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-438041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-438041/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-438041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:11:08.358818  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:11:08.358853  639611 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:11:08.358877  639611 ubuntu.go:190] setting up certificates
	I1124 03:11:08.358902  639611 provision.go:84] configureAuth start
	I1124 03:11:08.358979  639611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:08.377513  639611 provision.go:143] copyHostCerts
	I1124 03:11:08.377573  639611 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:11:08.377584  639611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:11:08.377654  639611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:11:08.377742  639611 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:11:08.377752  639611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:11:08.377785  639611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:11:08.377851  639611 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:11:08.377860  639611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:11:08.377905  639611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:11:08.378033  639611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.newest-cni-438041 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-438041]
	I1124 03:11:08.493906  639611 provision.go:177] copyRemoteCerts
	I1124 03:11:08.493995  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:11:08.494042  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.512353  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:08.611703  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:11:08.635092  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:11:08.653622  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:11:08.675705  639611 provision.go:87] duration metric: took 316.785216ms to configureAuth
	I1124 03:11:08.675736  639611 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:11:08.676005  639611 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:08.676156  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.697718  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:08.698047  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:08.698069  639611 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:11:08.991292  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:11:08.991321  639611 machine.go:97] duration metric: took 4.122852164s to provisionDockerMachine
	I1124 03:11:08.991334  639611 client.go:176] duration metric: took 11.662821141s to LocalClient.Create
	I1124 03:11:08.991367  639611 start.go:167] duration metric: took 11.662898329s to libmachine.API.Create "newest-cni-438041"
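The SSH command a few lines up writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube so CRI-O treats the in-cluster service CIDR (10.96.0.0/12) as an insecure registry, then restarts the daemon. A quick check that the option landed and the daemon survived the restart:

    # Verify the sysconfig drop-in and CRI-O's state after the restart.
    cat /etc/sysconfig/crio.minikube
    sudo systemctl is-active crio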
	I1124 03:11:08.991381  639611 start.go:293] postStartSetup for "newest-cni-438041" (driver="docker")
	I1124 03:11:08.991395  639611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:11:08.991454  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:11:08.991515  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.009958  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.110159  639611 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:11:09.113555  639611 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:11:09.113584  639611 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:11:09.113597  639611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:11:09.113650  639611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:11:09.113762  639611 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:11:09.113944  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:11:09.121410  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:09.140617  639611 start.go:296] duration metric: took 149.222262ms for postStartSetup
	I1124 03:11:09.141052  639611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:09.158606  639611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json ...
	I1124 03:11:09.158846  639611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:11:09.158906  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.176052  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.271931  639611 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:11:09.276348  639611 start.go:128] duration metric: took 11.950609978s to createHost
	I1124 03:11:09.276376  639611 start.go:83] releasing machines lock for "newest-cni-438041", held for 11.950766604s
	I1124 03:11:09.276440  639611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:09.294908  639611 ssh_runner.go:195] Run: cat /version.json
	I1124 03:11:09.294952  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.294957  639611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:11:09.295031  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.313079  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.314881  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.408772  639611 ssh_runner.go:195] Run: systemctl --version
	I1124 03:11:09.469031  639611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:11:09.504409  639611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:11:09.508820  639611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:11:09.508877  639611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:11:09.533917  639611 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
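The find/mv above parks any default bridge or podman CNI configs as *.mk_disabled so that only the CNI minikube picked (kindnet, per the cni.go lines below) supplies pod networking. The reverse operation, should the stock configs ever be needed again, is a loop along these lines:

    # Restore CNI configs that minikube parked as *.mk_disabled.
    for f in /etc/cni/net.d/*.mk_disabled; do
      [ -e "$f" ] && sudo mv "$f" "${f%.mk_disabled}"
    done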
	I1124 03:11:09.533945  639611 start.go:496] detecting cgroup driver to use...
	I1124 03:11:09.533978  639611 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:11:09.534024  639611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:11:09.550223  639611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:11:09.561378  639611 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:11:09.561431  639611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:11:09.576700  639611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:11:09.592718  639611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:11:09.686327  639611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:11:09.778323  639611 docker.go:234] disabling docker service ...
	I1124 03:11:09.778388  639611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:11:09.797725  639611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:11:09.809981  639611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:11:09.897574  639611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:11:09.981763  639611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:11:09.993604  639611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:11:10.008039  639611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:11:10.008088  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.017807  639611 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:11:10.017915  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.026036  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.034318  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.042375  639611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:11:10.050115  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.058198  639611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.071036  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.079079  639611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:11:10.085901  639611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:11:10.092631  639611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:10.187290  639611 ssh_runner.go:195] Run: sudo systemctl restart crio
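The sed runs above patch /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, conmon cgroup, and a default sysctl opening unprivileged ports. A declarative alternative (not what minikube itself does; the 99- filename is illustrative) would be a later-sorting drop-in carrying the same TOML keys, since CRI-O reads conf.d in lexical order:

    # Hypothetical drop-in equivalent of the sed edits above.
    printf '%s\n' \
      '[crio.image]' \
      'pause_image = "registry.k8s.io/pause:3.10.1"' \
      '[crio.runtime]' \
      'cgroup_manager = "systemd"' \
      'conmon_cgroup = "pod"' \
      'default_sysctls = [' \
      '  "net.ipv4.ip_unprivileged_port_start=0",' \
      ']' | sudo tee /etc/crio/crio.conf.d/99-overrides.conf >/dev/null
    sudo systemctl restart crio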
	I1124 03:11:10.321446  639611 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:11:10.321516  639611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:11:10.325320  639611 start.go:564] Will wait 60s for crictl version
	I1124 03:11:10.325377  639611 ssh_runner.go:195] Run: which crictl
	I1124 03:11:10.328940  639611 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:11:10.355782  639611 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
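The version probe above succeeds because /etc/crictl.yaml (written a few lines earlier) pins runtime-endpoint to the CRI-O socket; without that file crictl would fall back to probing its default endpoints. The equivalent explicit invocation:

    # Query CRI-O directly, independent of /etc/crictl.yaml.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version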
	I1124 03:11:10.355854  639611 ssh_runner.go:195] Run: crio --version
	I1124 03:11:10.386668  639611 ssh_runner.go:195] Run: crio --version
	I1124 03:11:10.419997  639611 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:11:10.421239  639611 cli_runner.go:164] Run: docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:11:10.440078  639611 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:11:10.443982  639611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:10.455537  639611 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 03:11:10.456654  639611 kubeadm.go:884] updating cluster {Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:11:10.456815  639611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:11:10.456863  639611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:10.490472  639611 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:10.490492  639611 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:11:10.490540  639611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:10.519699  639611 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:10.519720  639611 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:11:10.519729  639611 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:11:10.519828  639611 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-438041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:11:10.519912  639611 ssh_runner.go:195] Run: crio config
	I1124 03:11:10.565191  639611 cni.go:84] Creating CNI manager for ""
	I1124 03:11:10.565215  639611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:10.565239  639611 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 03:11:10.565270  639611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-438041 NodeName:newest-cni-438041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:10.565418  639611 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-438041"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
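For reference, the four manifests above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what minikube writes to /var/tmp/minikube/kubeadm.yaml further down; the multi-document file can be sanity-checked offline before init. A minimal sketch, assuming kubeadm v1.26+ (which added the validate subcommand) is on the PATH:

  # validate the generated config against the kubeadm API schema
  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml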
	
	I1124 03:11:10.565482  639611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:10.573438  639611 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:11:10.573499  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:10.581224  639611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:11:10.593276  639611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:10.607346  639611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1124 03:11:10.619134  639611 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:10.622475  639611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:10.631680  639611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:10.724670  639611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:10.750283  639611 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041 for IP: 192.168.94.2
	I1124 03:11:10.750306  639611 certs.go:195] generating shared ca certs ...
	I1124 03:11:10.750339  639611 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:10.750511  639611 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:10.750555  639611 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:10.750565  639611 certs.go:257] generating profile certs ...
	I1124 03:11:10.750620  639611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key
	I1124 03:11:10.750633  639611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.crt with IP's: []
	I1124 03:11:10.920017  639611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.crt ...
	I1124 03:11:10.920047  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.crt: {Name:mkfd139af0a71cd4698b8ff5b3e638153eeb0dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:10.920228  639611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key ...
	I1124 03:11:10.920243  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key: {Name:mke75272685634ebc2912579601c6ca7cb4478b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:10.920357  639611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183
	I1124 03:11:10.920374  639611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 03:11:11.156793  639611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183 ...
	I1124 03:11:11.156820  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183: {Name:mke55e2e412acbf5b903a8d8b4a7d2880f9fbe7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:11.157004  639611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183 ...
	I1124 03:11:11.157022  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183: {Name:mkad44470d73de35f2d3ae6d5e6d61417cfe11c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:11.157103  639611 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt
	I1124 03:11:11.157202  639611 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key
	I1124 03:11:11.157264  639611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key
	I1124 03:11:11.157285  639611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt with IP's: []
	I1124 03:11:11.183331  639611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt ...
	I1124 03:11:11.183357  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt: {Name:mkaf061d70fce7922fd95db6d82ac8186d66239f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:11.183478  639611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key ...
	I1124 03:11:11.183490  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key: {Name:mk44940b01cb7f629207bffeb036b8a7e5d40814 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
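The client, apiserver, and proxy-client profile certs generated above can be inspected with openssl to confirm validity and SANs; a minimal sketch, assuming OpenSSL 1.1.1+ (paths are the profile paths from the log, shortened to $HOME for readability):

  # subject, issuer and validity window of the freshly minted client cert
  openssl x509 -noout -subject -issuer -dates \
    -in $HOME/.minikube/profiles/newest-cni-438041/client.crt
  # SANs on the apiserver cert should cover 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.94.2
  openssl x509 -noout -ext subjectAltName \
    -in $HOME/.minikube/profiles/newest-cni-438041/apiserver.crt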
	I1124 03:11:11.183656  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:11.183693  639611 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:11.183702  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:11.183724  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:11.183746  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:11.183768  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:11.183810  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:11.184490  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:11.202414  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:11.218915  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:11.235233  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:11.251127  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:11:11.267814  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:11:11.284563  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:11.300790  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:11:11.316788  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:11.334413  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:11.350424  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:11.366533  639611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:11.378365  639611 ssh_runner.go:195] Run: openssl version
	I1124 03:11:11.384126  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:11.391937  639611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:11.395429  639611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:11.395475  639611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:11.428268  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:11:11.435958  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:11.443551  639611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:11.446861  639611 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:11.446917  639611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:11.480561  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:11.488521  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:11.496317  639611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:11.499903  639611 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:11.500486  639611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:11.534970  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
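The 8-hex-digit link names being tested above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject hashes, which is why each block runs openssl x509 -hash before creating the symlink; a sketch of the same recipe:

  # the /etc/ssl/certs symlink name is the cert's subject hash plus a .0 suffix
  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"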
	I1124 03:11:11.542760  639611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:11.546025  639611 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:11:11.546084  639611 kubeadm.go:401] StartCluster: {Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:11.546189  639611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:11.546235  639611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:11.573079  639611 cri.go:89] found id: ""
	I1124 03:11:11.573143  639611 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:11.580989  639611 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:11:11.588193  639611 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:11:11.588243  639611 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:11:11.595578  639611 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:11:11.595596  639611 kubeadm.go:158] found existing configuration files:
	
	I1124 03:11:11.595632  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:11:11.602806  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:11:11.602846  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:11:11.609710  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:11:11.617281  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:11:11.617327  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:11:11.624606  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:11:11.631999  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:11:11.632041  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:11:11.640350  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:11:11.648359  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:11:11.648402  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:11:11.656826  639611 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:11:11.705613  639611 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:11:11.705684  639611 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:11:11.726192  639611 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:11:11.726285  639611 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:11:11.726340  639611 kubeadm.go:319] OS: Linux
	I1124 03:11:11.726397  639611 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:11:11.726461  639611 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:11:11.726524  639611 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:11:11.726587  639611 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:11:11.726686  639611 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:11:11.726790  639611 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:11:11.726861  639611 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:11:11.726943  639611 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:11:11.786505  639611 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:11:11.786613  639611 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:11:11.786747  639611 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:11:11.794629  639611 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1124 03:11:08.757098  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	W1124 03:11:10.757264  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	I1124 03:11:11.798699  639611 out.go:252]   - Generating certificates and keys ...
	I1124 03:11:11.798797  639611 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:11:11.798912  639611 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:11:11.963263  639611 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:11:12.107595  639611 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:11:07.983375  631782 out.go:252]   - Generating certificates and keys ...
	I1124 03:11:07.983499  631782 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:11:07.983606  631782 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:11:09.010428  631782 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:11:09.257194  631782 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:11:09.494535  631782 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:11:09.716956  631782 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:11:09.775865  631782 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:11:09.776099  631782 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-603010] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:11:10.030969  631782 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:11:10.031162  631782 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-603010] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:11:10.290289  631782 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:11:10.445776  631782 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:11:10.719700  631782 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:11:10.719788  631782 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:11:10.954056  631782 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:11:11.224490  631782 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:11:11.470938  631782 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:11:11.927378  631782 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:11:12.303932  631782 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:11:12.304513  631782 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:11:12.307975  631782 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:11:12.309284  631782 out.go:252]   - Booting up control plane ...
	I1124 03:11:12.309381  631782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:11:12.309465  631782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:11:12.310009  631782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:11:12.339837  631782 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:11:12.340003  631782 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:11:12.347388  631782 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:11:12.347620  631782 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:11:12.347698  631782 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:11:12.466844  631782 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:11:12.466970  631782 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
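The kubelet-check phase polls the kubelet's local healthz endpoint shown above; the same probe can be reproduced by hand on the node (a sketch, endpoint taken from the log):

  # prints the trailer once the kubelet is serving; -sf makes curl exit non-zero otherwise
  curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy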
	I1124 03:11:12.233009  639611 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:11:12.451335  639611 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:11:12.593355  639611 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:11:12.593574  639611 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-438041] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:11:13.275810  639611 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:11:13.276017  639611 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-438041] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:11:14.145354  639611 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:11:14.614138  639611 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:11:14.941086  639611 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:11:14.941227  639611 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:11:15.058919  639611 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:11:15.267378  639611 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:11:15.939232  639611 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:11:16.257592  639611 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:11:16.635822  639611 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:11:16.636485  639611 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:11:16.640110  639611 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1124 03:11:13.256972  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	W1124 03:11:15.259252  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	I1124 03:11:12.968700  631782 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.726277ms
	I1124 03:11:12.972359  631782 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:11:12.972498  631782 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 03:11:12.972634  631782 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:11:12.972778  631782 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:11:15.168823  631782 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.194903045s
	I1124 03:11:15.395212  631782 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.422782586s
	I1124 03:11:16.974533  631782 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002117874s
	I1124 03:11:16.990327  631782 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:11:17.001157  631782 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:11:17.009558  631782 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:11:17.009832  631782 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-603010 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:11:17.017079  631782 kubeadm.go:319] [bootstrap-token] Using token: qixyjy.v1lkfw8d9c2mcnrf
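Bootstrap tokens such as qixyjy.v1lkfw8d9c2mcnrf are short-lived (ttl: 24h0m0s in the generated config above) and can be listed or re-minted on the control plane if a join is attempted after expiry; a minimal sketch:

  kubeadm token list                          # shows token, TTL and usages
  kubeadm token create --print-join-command   # emits a fresh 'kubeadm join' line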
	I1124 03:11:16.641561  639611 out.go:252]   - Booting up control plane ...
	I1124 03:11:16.641675  639611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:11:16.641789  639611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:11:16.642679  639611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:11:16.660968  639611 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:11:16.661101  639611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:11:16.668686  639611 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:11:16.669004  639611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:11:16.669064  639611 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:11:16.793748  639611 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:11:16.793925  639611 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:11:17.712301  636397 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:11:17.712380  636397 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:11:17.712515  636397 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:11:17.712609  636397 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:11:17.712667  636397 kubeadm.go:319] OS: Linux
	I1124 03:11:17.712717  636397 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:11:17.712772  636397 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:11:17.712846  636397 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:11:17.712998  636397 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:11:17.713081  636397 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:11:17.713158  636397 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:11:17.713228  636397 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:11:17.713298  636397 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:11:17.713410  636397 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:11:17.713559  636397 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:11:17.713706  636397 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:11:17.713767  636397 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:11:17.715195  636397 out.go:252]   - Generating certificates and keys ...
	I1124 03:11:17.715298  636397 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:11:17.715442  636397 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:11:17.715523  636397 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:11:17.715597  636397 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:11:17.715657  636397 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:11:17.715733  636397 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:11:17.715822  636397 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:11:17.716053  636397 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993813 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:11:17.716134  636397 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:11:17.716334  636397 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993813 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:11:17.716443  636397 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:11:17.716537  636397 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:11:17.716600  636397 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:11:17.716682  636397 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:11:17.716772  636397 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:11:17.716823  636397 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:11:17.716938  636397 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:11:17.717053  636397 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:11:17.717141  636397 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:11:17.717221  636397 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:11:17.717295  636397 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:11:17.718959  636397 out.go:252]   - Booting up control plane ...
	I1124 03:11:17.719049  636397 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:11:17.719135  636397 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:11:17.719219  636397 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:11:17.719341  636397 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:11:17.719462  636397 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:11:17.719560  636397 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:11:17.719632  636397 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:11:17.719681  636397 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:11:17.719830  636397 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:11:17.719976  636397 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:11:17.720049  636397 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501467711s
	I1124 03:11:17.720160  636397 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:11:17.720268  636397 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1124 03:11:17.720406  636397 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:11:17.720513  636397 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:11:17.720614  636397 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.599087563s
	I1124 03:11:17.720742  636397 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.501028525s
	I1124 03:11:17.720844  636397 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00179766s
	I1124 03:11:17.721018  636397 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:11:17.721192  636397 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:11:17.721298  636397 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:11:17.721558  636397 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-993813 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:11:17.721622  636397 kubeadm.go:319] [bootstrap-token] Using token: q5wdgj.p9bwnkl5amhf01kb
	I1124 03:11:17.722776  636397 out.go:252]   - Configuring RBAC rules ...
	I1124 03:11:17.722949  636397 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:11:17.723089  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:11:17.723273  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:11:17.723470  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:11:17.723636  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:11:17.723759  636397 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:11:17.723924  636397 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:11:17.723997  636397 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:11:17.724057  636397 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:11:17.724062  636397 kubeadm.go:319] 
	I1124 03:11:17.724140  636397 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:11:17.724145  636397 kubeadm.go:319] 
	I1124 03:11:17.724249  636397 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:11:17.724254  636397 kubeadm.go:319] 
	I1124 03:11:17.724288  636397 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:11:17.724365  636397 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:11:17.724429  636397 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:11:17.724434  636397 kubeadm.go:319] 
	I1124 03:11:17.724504  636397 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:11:17.724509  636397 kubeadm.go:319] 
	I1124 03:11:17.724570  636397 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:11:17.724576  636397 kubeadm.go:319] 
	I1124 03:11:17.724642  636397 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:11:17.724751  636397 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:11:17.724845  636397 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:11:17.724850  636397 kubeadm.go:319] 
	I1124 03:11:17.724962  636397 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:11:17.725053  636397 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:11:17.725058  636397 kubeadm.go:319] 
	I1124 03:11:17.725156  636397 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token q5wdgj.p9bwnkl5amhf01kb \
	I1124 03:11:17.725281  636397 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:11:17.725306  636397 kubeadm.go:319] 	--control-plane 
	I1124 03:11:17.725311  636397 kubeadm.go:319] 
	I1124 03:11:17.725412  636397 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:11:17.725417  636397 kubeadm.go:319] 
	I1124 03:11:17.725515  636397 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token q5wdgj.p9bwnkl5amhf01kb \
	I1124 03:11:17.725654  636397 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
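The --discovery-token-ca-cert-hash printed in both join commands is the SHA-256 of the cluster CA's DER-encoded public key; the standard openssl recipe reproduces it. A sketch, using the CA path minikube copied earlier (/var/lib/minikube/certs/ca.crt) rather than the default kubeadm location:

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl pkey -pubin -outform der \
    | openssl dgst -sha256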
	I1124 03:11:17.725664  636397 cni.go:84] Creating CNI manager for ""
	I1124 03:11:17.725672  636397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:17.727357  636397 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:11:17.018572  631782 out.go:252]   - Configuring RBAC rules ...
	I1124 03:11:17.018732  631782 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:11:17.021245  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:11:17.025919  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:11:17.028242  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:11:17.030590  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:11:17.032723  631782 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:11:17.380197  631782 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:11:17.802727  631782 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:11:18.381075  631782 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:11:18.382320  631782 kubeadm.go:319] 
	I1124 03:11:18.382408  631782 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:11:18.382416  631782 kubeadm.go:319] 
	I1124 03:11:18.382508  631782 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:11:18.382522  631782 kubeadm.go:319] 
	I1124 03:11:18.382554  631782 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:11:18.382630  631782 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:11:18.382704  631782 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:11:18.382712  631782 kubeadm.go:319] 
	I1124 03:11:18.382781  631782 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:11:18.382791  631782 kubeadm.go:319] 
	I1124 03:11:18.382850  631782 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:11:18.382859  631782 kubeadm.go:319] 
	I1124 03:11:18.382948  631782 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:11:18.383059  631782 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:11:18.383153  631782 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:11:18.383164  631782 kubeadm.go:319] 
	I1124 03:11:18.383265  631782 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:11:18.383360  631782 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:11:18.383370  631782 kubeadm.go:319] 
	I1124 03:11:18.383510  631782 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qixyjy.v1lkfw8d9c2mcnrf \
	I1124 03:11:18.383708  631782 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:11:18.383747  631782 kubeadm.go:319] 	--control-plane 
	I1124 03:11:18.383767  631782 kubeadm.go:319] 
	I1124 03:11:18.383880  631782 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:11:18.383909  631782 kubeadm.go:319] 
	I1124 03:11:18.384037  631782 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qixyjy.v1lkfw8d9c2mcnrf \
	I1124 03:11:18.384180  631782 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
	I1124 03:11:18.387182  631782 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:11:18.387348  631782 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:11:18.387386  631782 cni.go:84] Creating CNI manager for ""
	I1124 03:11:18.387399  631782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:18.389706  631782 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:11:17.729080  636397 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:11:17.735280  636397 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:11:17.735299  636397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:11:17.750224  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:11:17.964488  636397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:11:17.964571  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:17.964583  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993813 minikube.k8s.io/updated_at=2025_11_24T03_11_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=default-k8s-diff-port-993813 minikube.k8s.io/primary=true
	I1124 03:11:17.977541  636397 ops.go:34] apiserver oom_adj: -16
	I1124 03:11:18.089531  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:18.589931  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:17.757544  623347 node_ready.go:49] node "old-k8s-version-579951" is "Ready"
	I1124 03:11:17.757568  623347 node_ready.go:38] duration metric: took 13.503706583s for node "old-k8s-version-579951" to be "Ready" ...
	I1124 03:11:17.757591  623347 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:17.757632  623347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:11:17.769351  623347 api_server.go:72] duration metric: took 13.944624755s to wait for apiserver process to appear ...
	I1124 03:11:17.769381  623347 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:17.769404  623347 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 03:11:17.773486  623347 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 03:11:17.774606  623347 api_server.go:141] control plane version: v1.28.0
	I1124 03:11:17.774639  623347 api_server.go:131] duration metric: took 5.249615ms to wait for apiserver health ...
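The healthz probe above can be run by hand against the same endpoint; a sketch (-k skips TLS verification, since the minikube CA is not in the host trust store):

  curl -k https://192.168.103.2:8443/healthz   # prints 'ok' when healthy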
	I1124 03:11:17.774650  623347 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:17.778732  623347 system_pods.go:59] 8 kube-system pods found
	I1124 03:11:17.778769  623347 system_pods.go:61] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:17.778779  623347 system_pods.go:61] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:17.778787  623347 system_pods.go:61] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:17.778792  623347 system_pods.go:61] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:17.778797  623347 system_pods.go:61] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:17.778806  623347 system_pods.go:61] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:17.778810  623347 system_pods.go:61] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:17.778817  623347 system_pods.go:61] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:17.778824  623347 system_pods.go:74] duration metric: took 4.167214ms to wait for pod list to return data ...
	I1124 03:11:17.778835  623347 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:17.781411  623347 default_sa.go:45] found service account: "default"
	I1124 03:11:17.781435  623347 default_sa.go:55] duration metric: took 2.594162ms for default service account to be created ...
	I1124 03:11:17.781446  623347 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:11:17.784981  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:17.785018  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:17.785031  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:17.785044  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:17.785050  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:17.785061  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:17.785066  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:17.785076  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:17.785090  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:17.785127  623347 retry.go:31] will retry after 271.484184ms: missing components: kube-dns
	I1124 03:11:18.065194  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:18.065237  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:18.065248  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:18.065257  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:18.065263  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:18.065269  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:18.065274  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:18.065279  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:18.065287  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:18.065306  623347 retry.go:31] will retry after 388.018904ms: missing components: kube-dns
	I1124 03:11:18.457864  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:18.457936  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:18.457946  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:18.457961  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:18.457972  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:18.457978  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:18.457984  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:18.457991  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:18.457999  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:18.458022  623347 retry.go:31] will retry after 449.601826ms: missing components: kube-dns
	I1124 03:11:18.911831  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:18.911859  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Running
	I1124 03:11:18.911865  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:18.911869  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:18.911873  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:18.911877  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:18.911880  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:18.911916  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:18.911921  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Running
	I1124 03:11:18.911931  623347 system_pods.go:126] duration metric: took 1.130477915s to wait for k8s-apps to be running ...
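	
	(The polling above is minikube's generic retry loop: list the kube-system pods, diff against the expected component set, and retry after a growing randomized backoff until nothing is missing. A minimal client-go sketch of the same pattern follows; the function name, interval, and timeout are illustrative, not minikube's actual code.)
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForKubeDNS polls kube-system until a kube-dns pod reports Running,
	// the condition the "missing components: kube-dns" retries above wait for.
	func waitForKubeDNS(ctx context.Context, cs kubernetes.Interface) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
					LabelSelector: "k8s-app=kube-dns",
				})
				if err != nil {
					return false, nil // treat API hiccups as retryable
				}
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						return true, nil
					}
				}
				return false, nil
			})
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		if err := waitForKubeDNS(context.Background(), kubernetes.NewForConfigOrDie(cfg)); err != nil {
			panic(err)
		}
		fmt.Println("all expected kube-system components are running")
	}
	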
	I1124 03:11:18.911944  623347 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:11:18.911996  623347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:18.925774  623347 system_svc.go:56] duration metric: took 13.819357ms WaitForService to wait for kubelet
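	
	(The kubelet probe above leans entirely on systemctl's exit status: is-active --quiet prints nothing and exits 0 only when the unit is active. In Go that is a single exec call; a sketch mirroring the exact command run over SSH above.)
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// kubeletActive mirrors `sudo systemctl is-active --quiet service kubelet`:
	// --quiet suppresses output, so the exit code alone carries the answer.
	func kubeletActive() bool {
		return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
	}
	
	func main() {
		fmt.Println("kubelet active:", kubeletActive())
	}
	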
	I1124 03:11:18.925804  623347 kubeadm.go:587] duration metric: took 15.101081639s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:11:18.925827  623347 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:18.928599  623347 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:18.928633  623347 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:18.928652  623347 node_conditions.go:105] duration metric: took 2.818338ms to run NodePressure ...
	I1124 03:11:18.928667  623347 start.go:242] waiting for startup goroutines ...
	I1124 03:11:18.928681  623347 start.go:247] waiting for cluster config update ...
	I1124 03:11:18.928701  623347 start.go:256] writing updated cluster config ...
	I1124 03:11:18.929049  623347 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:18.933285  623347 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:18.937686  623347 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.946299  623347 pod_ready.go:94] pod "coredns-5dd5756b68-5nwx9" is "Ready"
	I1124 03:11:18.946320  623347 pod_ready.go:86] duration metric: took 8.611977ms for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.950801  623347 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.960988  623347 pod_ready.go:94] pod "etcd-old-k8s-version-579951" is "Ready"
	I1124 03:11:18.961015  623347 pod_ready.go:86] duration metric: took 10.19455ms for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.965881  623347 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.974882  623347 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-579951" is "Ready"
	I1124 03:11:18.974933  623347 pod_ready.go:86] duration metric: took 9.016779ms for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.977770  623347 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:19.341020  623347 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-579951" is "Ready"
	I1124 03:11:19.341052  623347 pod_ready.go:86] duration metric: took 363.250058ms for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:19.538869  623347 pod_ready.go:83] waiting for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:19.937877  623347 pod_ready.go:94] pod "kube-proxy-r82jh" is "Ready"
	I1124 03:11:19.937925  623347 pod_ready.go:86] duration metric: took 399.001292ms for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:20.140275  623347 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:20.537761  623347 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-579951" is "Ready"
	I1124 03:11:20.537795  623347 pod_ready.go:86] duration metric: took 397.491187ms for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:20.537812  623347 pod_ready.go:40] duration metric: took 1.604492738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
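	
	(Each per-pod wait above reduces to reading the pod's PodReady status condition. A sketch of that test; the helper name is ours.)
	
	import corev1 "k8s.io/api/core/v1"
	
	// isPodReady reports whether the PodReady condition is True, the check
	// behind every `pod "<name>" is "Ready"` line above.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
	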
	I1124 03:11:20.582109  623347 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 03:11:20.583699  623347 out.go:203] 
	W1124 03:11:20.584752  623347 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:11:20.585796  623347 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:11:20.587217  623347 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-579951" cluster and "default" namespace by default
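	
	(The skew warning above is plain arithmetic: kubectl 1.34.2 against cluster 1.28.0 is 34 - 28 = 6 minor versions, well past the one-minor skew kubectl supports. A toy version of the computation; real minikube uses a semver library and handles malformed input.)
	
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
	)
	
	// minorSkew returns the absolute minor-version distance between two
	// "MAJOR.MINOR.PATCH" strings, e.g. "1.34.2" vs "1.28.0" -> 6.
	func minorSkew(a, b string) int {
		minor := func(v string) int {
			parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
			m, _ := strconv.Atoi(parts[1])
			return m
		}
		d := minor(a) - minor(b)
		if d < 0 {
			d = -d
		}
		return d
	}
	
	func main() {
		fmt.Println(minorSkew("1.34.2", "1.28.0")) // prints 6, as reported above
	}
	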
	I1124 03:11:17.795245  639611 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001564938s
	I1124 03:11:17.799260  639611 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:11:17.799423  639611 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 03:11:17.799562  639611 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:11:17.799651  639611 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:11:20.070827  639611 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.271449475s
	I1124 03:11:20.290602  639611 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.491348646s
	I1124 03:11:21.801475  639611 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002149825s
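	
	(The [control-plane-check] phase above polls each component's local TLS health endpoint until it answers 200. A rough standalone equivalent; certificate verification is skipped because these loopback endpoints use self-signed certs, and this is a sketch rather than kubeadm's implementation.)
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// probe polls url until it returns HTTP 200 or the deadline passes, like
	// the livez/healthz checks logged above.
	func probe(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(250 * time.Millisecond)
		}
		return fmt.Errorf("%s not healthy after %s", url, timeout)
	}
	
	func main() {
		for _, u := range []string{
			"https://192.168.94.2:8443/livez", // kube-apiserver
			"https://127.0.0.1:10257/healthz", // kube-controller-manager
			"https://127.0.0.1:10259/livez",   // kube-scheduler
		} {
			if err := probe(u, 4*time.Minute); err != nil {
				panic(err)
			}
			fmt.Println(u, "ok")
		}
	}
	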
	I1124 03:11:21.812595  639611 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:11:21.822553  639611 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:11:21.831169  639611 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:11:21.831446  639611 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-438041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:11:21.841628  639611 kubeadm.go:319] [bootstrap-token] Using token: yx8fea.c13myzzt6w383nef
	I1124 03:11:21.842995  639611 out.go:252]   - Configuring RBAC rules ...
	I1124 03:11:21.843145  639611 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:11:21.846076  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:11:21.851007  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:11:21.853367  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:11:21.856222  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:11:21.859271  639611 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:11:19.090574  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:19.589602  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.090576  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.590533  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.089866  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.589593  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.089582  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.590222  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.673854  636397 kubeadm.go:1114] duration metric: took 4.709348594s to wait for elevateKubeSystemPrivileges
	I1124 03:11:22.673908  636397 kubeadm.go:403] duration metric: took 16.63377865s to StartCluster
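	
	(The burst of identical `kubectl get sa default` runs above is elevateKubeSystemPrivileges waiting, on a half-second cadence, for kubeadm's controllers to create the default ServiceAccount. The same wait expressed with client-go; function name and timeout are ours.)
	
	import (
		"context"
		"time"
	
		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitForDefaultSA retries until the "default" ServiceAccount exists in
	// the default namespace, matching the repeated `get sa default` runs above.
	func waitForDefaultSA(ctx context.Context, cs kubernetes.Interface) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
				if apierrors.IsNotFound(err) {
					return false, nil // not created yet; keep polling
				}
				return err == nil, nil
			})
	}
	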
	I1124 03:11:22.673934  636397 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:22.674008  636397 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:22.675076  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:22.675302  636397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:22.675326  636397 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:22.675390  636397 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993813"
	I1124 03:11:22.675304  636397 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:22.675418  636397 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993813"
	I1124 03:11:22.675431  636397 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993813"
	I1124 03:11:22.675411  636397 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993813"
	I1124 03:11:22.675530  636397 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:11:22.675536  636397 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:22.675814  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:11:22.676034  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:11:22.676852  636397 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:22.678754  636397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:22.703150  636397 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993813"
	I1124 03:11:22.703198  636397 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:11:22.703676  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:11:22.704736  636397 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:18.390820  631782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:11:18.395615  631782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:11:18.395633  631782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:11:18.409234  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:11:18.710608  631782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:11:18.710754  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:18.710853  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-603010 minikube.k8s.io/updated_at=2025_11_24T03_11_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=no-preload-603010 minikube.k8s.io/primary=true
	I1124 03:11:18.818373  631782 ops.go:34] apiserver oom_adj: -16
	I1124 03:11:18.818465  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:19.318531  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:19.819135  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.319402  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.819441  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.319189  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.818604  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.319077  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.706096  636397 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:22.706117  636397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:22.706176  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:22.737283  636397 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:22.737304  636397 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:22.737370  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:22.740863  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:22.761473  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:22.778645  636397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:22.830555  636397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:22.862561  636397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:22.876089  636397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:22.963053  636397 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
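	
	(The long sed pipeline above edits the CoreDNS Corefile in transit: it inserts a `log` directive before `errors` and a `hosts` stanza ahead of the `forward` plugin, so in-cluster lookups of host.minikube.internal resolve to the host-side gateway 192.168.76.1. The replaced Corefile then contains, roughly:)
	
	.:53 {
	    log
	    errors
	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    # ... remaining default plugins (kubernetes, cache, loop, reload, ...) unchanged
	}
	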
	I1124 03:11:22.964307  636397 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:11:23.185636  636397 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:22.209953  639611 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:11:22.623609  639611 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:11:23.207075  639611 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:11:23.208086  639611 kubeadm.go:319] 
	I1124 03:11:23.208184  639611 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:11:23.208202  639611 kubeadm.go:319] 
	I1124 03:11:23.208296  639611 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:11:23.208304  639611 kubeadm.go:319] 
	I1124 03:11:23.208344  639611 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:11:23.208443  639611 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:11:23.208509  639611 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:11:23.208519  639611 kubeadm.go:319] 
	I1124 03:11:23.208591  639611 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:11:23.208601  639611 kubeadm.go:319] 
	I1124 03:11:23.208661  639611 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:11:23.208671  639611 kubeadm.go:319] 
	I1124 03:11:23.208771  639611 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:11:23.208934  639611 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:11:23.209014  639611 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:11:23.209021  639611 kubeadm.go:319] 
	I1124 03:11:23.209090  639611 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:11:23.209153  639611 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:11:23.209159  639611 kubeadm.go:319] 
	I1124 03:11:23.209225  639611 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yx8fea.c13myzzt6w383nef \
	I1124 03:11:23.209329  639611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:11:23.209368  639611 kubeadm.go:319] 	--control-plane 
	I1124 03:11:23.209382  639611 kubeadm.go:319] 
	I1124 03:11:23.209513  639611 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:11:23.209523  639611 kubeadm.go:319] 
	I1124 03:11:23.209667  639611 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yx8fea.c13myzzt6w383nef \
	I1124 03:11:23.209795  639611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
	I1124 03:11:23.212372  639611 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:11:23.212472  639611 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
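	
	(The --discovery-token-ca-cert-hash printed with the join commands above is not arbitrary: it is "sha256:" plus the hex SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA. A sketch of that computation from a ca.crt PEM:)
	
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	// caCertHash reproduces kubeadm's discovery hash: the SHA-256 of the CA
	// certificate's DER-encoded Subject Public Key Info, prefixed "sha256:".
	func caCertHash(pemBytes []byte) (string, error) {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return "", fmt.Errorf("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		return "sha256:" + hex.EncodeToString(sum[:]), nil
	}
	
	func main() {
		pemBytes, err := os.ReadFile(os.Args[1]) // path to the cluster's ca.crt
		if err != nil {
			panic(err)
		}
		h, err := caCertHash(pemBytes)
		if err != nil {
			panic(err)
		}
		fmt.Println(h)
	}
	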
	I1124 03:11:23.212489  639611 cni.go:84] Creating CNI manager for ""
	I1124 03:11:23.212498  639611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:23.213669  639611 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:11:22.819290  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.318726  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.413238  631782 kubeadm.go:1114] duration metric: took 4.702498844s to wait for elevateKubeSystemPrivileges
	I1124 03:11:23.413274  631782 kubeadm.go:403] duration metric: took 15.686211393s to StartCluster
	I1124 03:11:23.413298  631782 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:23.413374  631782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:23.415097  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:23.415455  631782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:23.415991  631782 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:23.416200  631782 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:23.416393  631782 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:23.416478  631782 addons.go:70] Setting storage-provisioner=true in profile "no-preload-603010"
	I1124 03:11:23.416515  631782 addons.go:239] Setting addon storage-provisioner=true in "no-preload-603010"
	I1124 03:11:23.416545  631782 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:11:23.416771  631782 addons.go:70] Setting default-storageclass=true in profile "no-preload-603010"
	I1124 03:11:23.416794  631782 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603010"
	I1124 03:11:23.417522  631782 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:11:23.418922  631782 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:11:23.420690  631782 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:23.422440  631782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:23.453170  631782 addons.go:239] Setting addon default-storageclass=true in "no-preload-603010"
	I1124 03:11:23.453315  631782 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:11:23.454249  631782 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:11:23.456721  631782 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:23.187200  636397 addons.go:530] duration metric: took 511.871879ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:23.468811  636397 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993813" context rescaled to 1 replicas
	I1124 03:11:23.457832  631782 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:23.457852  631782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:23.457945  631782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:11:23.485040  631782 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:23.485073  631782 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:23.485135  631782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:11:23.488649  631782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:11:23.522776  631782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:11:23.578154  631782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:23.637057  631782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:23.642323  631782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:23.675165  631782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:23.795763  631782 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:23.982706  631782 node_ready.go:35] waiting up to 6m0s for node "no-preload-603010" to be "Ready" ...
	I1124 03:11:23.988365  631782 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:23.214606  639611 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:11:23.218969  639611 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:11:23.219002  639611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:11:23.233030  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:11:23.530587  639611 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:11:23.530753  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.530907  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-438041 minikube.k8s.io/updated_at=2025_11_24T03_11_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=newest-cni-438041 minikube.k8s.io/primary=true
	I1124 03:11:23.553306  639611 ops.go:34] apiserver oom_adj: -16
	I1124 03:11:23.638819  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:24.139560  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:24.639641  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:25.139273  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:25.638941  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:26.139461  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:26.638988  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.989407  631782 addons.go:530] duration metric: took 573.023057ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:24.300916  631782 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-603010" context rescaled to 1 replicas
	W1124 03:11:25.985432  631782 node_ready.go:57] node "no-preload-603010" has "Ready":"False" status (will retry)
	I1124 03:11:27.139734  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:27.639015  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:28.139551  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:28.207738  639611 kubeadm.go:1114] duration metric: took 4.677029552s to wait for elevateKubeSystemPrivileges
	I1124 03:11:28.207780  639611 kubeadm.go:403] duration metric: took 16.661698302s to StartCluster
	I1124 03:11:28.207804  639611 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:28.207878  639611 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:28.209479  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:28.209719  639611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:28.209737  639611 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:28.209814  639611 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:28.209929  639611 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-438041"
	I1124 03:11:28.209946  639611 addons.go:70] Setting default-storageclass=true in profile "newest-cni-438041"
	I1124 03:11:28.209971  639611 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-438041"
	I1124 03:11:28.209980  639611 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-438041"
	I1124 03:11:28.210010  639611 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:28.210056  639611 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:28.210387  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:28.210537  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:28.211106  639611 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:28.212323  639611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:28.233230  639611 addons.go:239] Setting addon default-storageclass=true in "newest-cni-438041"
	I1124 03:11:28.233278  639611 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:28.233850  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:28.234771  639611 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:28.235819  639611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:28.235861  639611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:28.235962  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:28.261133  639611 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:28.261156  639611 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:28.261334  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:28.267999  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:28.289398  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:28.299784  639611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:28.359817  639611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:28.384919  639611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:28.404504  639611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:28.491961  639611 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:28.493110  639611 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:28.493157  639611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1124 03:11:28.510848  639611 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "newest-cni-438041" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1124 03:11:28.510875  639611 start.go:161] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
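	
	(The rescale failure above is a classic optimistic-concurrency conflict: the Deployment's resourceVersion changed between minikube's read and its write. minikube classifies it as non-retryable here, but the standard client-go remedy is to re-read and retry the update; a sketch:)
	
	import (
		"context"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)
	
	// scaleCoreDNS retries the read-modify-write on resourceVersion conflicts,
	// the usual fix for "the object has been modified" errors like the one above.
	func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			d, err := cs.AppsV1().Deployments("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
			if err != nil {
				return err
			}
			d.Spec.Replicas = &replicas
			_, err = cs.AppsV1().Deployments("kube-system").Update(ctx, d, metav1.UpdateOptions{})
			return err
		})
	}
	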
	I1124 03:11:28.701114  639611 api_server.go:72] duration metric: took 491.340672ms to wait for apiserver process to appear ...
	I1124 03:11:28.701143  639611 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:28.701166  639611 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:28.705994  639611 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:11:28.706754  639611 api_server.go:141] control plane version: v1.34.1
	I1124 03:11:28.706781  639611 api_server.go:131] duration metric: took 5.630796ms to wait for apiserver health ...
	I1124 03:11:28.706793  639611 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:28.709054  639611 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:28.709369  639611 system_pods.go:59] 9 kube-system pods found
	I1124 03:11:28.709395  639611 system_pods.go:61] "coredns-66bc5c9577-b5rlp" [ec3ad010-7694-4640-9638-fe6f5c97f56a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:28.709402  639611 system_pods.go:61] "coredns-66bc5c9577-mwvq8" [c8831e7f-34c0-40c7-a728-7f7882ed604a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:28.709411  639611 system_pods.go:61] "etcd-newest-cni-438041" [7acbb753-dfd2-4438-b370-a7e38c4fbc5d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:11:28.709418  639611 system_pods.go:61] "kindnet-xp46p" [19fa7668-24bd-454c-a5df-37534a06d3a5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:11:28.709423  639611 system_pods.go:61] "kube-apiserver-newest-cni-438041" [c7d90375-f6c0-4a1f-8b80-81574119b191] Running
	I1124 03:11:28.709432  639611 system_pods.go:61] "kube-controller-manager-newest-cni-438041" [54b144f6-6f26-4e9b-818b-cbb2d7b4c0a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:11:28.709437  639611 system_pods.go:61] "kube-proxy-n85pg" [86f875e2-7efc-4b60-b031-a1de71ea7502] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:11:28.709447  639611 system_pods.go:61] "kube-scheduler-newest-cni-438041" [75e99a3a-d4a9-4428-a52a-ef5ac4edc76c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:11:28.709457  639611 system_pods.go:61] "storage-provisioner" [9a94c2f7-e288-4528-b22c-f413d79bdf46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:28.709467  639611 system_pods.go:74] duration metric: took 2.667768ms to wait for pod list to return data ...
	I1124 03:11:28.709481  639611 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:28.710153  639611 addons.go:530] duration metric: took 500.34824ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:28.711298  639611 default_sa.go:45] found service account: "default"
	I1124 03:11:28.711317  639611 default_sa.go:55] duration metric: took 1.826862ms for default service account to be created ...
	I1124 03:11:28.711328  639611 kubeadm.go:587] duration metric: took 501.561139ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:11:28.711341  639611 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:28.713171  639611 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:28.713192  639611 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:28.713206  639611 node_conditions.go:105] duration metric: took 1.86027ms to run NodePressure ...
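	
	(The NodePressure verification reads these figures straight off the Node object: 304681132Ki and 8 CPUs are Status.Capacity values. A client-go fragment producing the same two lines; helper name is ours.)
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// printCapacity logs the same ephemeral-storage and cpu capacity values
	// the node_conditions check reports above.
	func printCapacity(ctx context.Context, cs kubernetes.Interface, name string) error {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := node.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
		fmt.Printf("node cpu capacity is %s\n", cpu.String())
		return nil
	}
	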
	I1124 03:11:28.713217  639611 start.go:242] waiting for startup goroutines ...
	I1124 03:11:28.713224  639611 start.go:247] waiting for cluster config update ...
	I1124 03:11:28.713233  639611 start.go:256] writing updated cluster config ...
	I1124 03:11:28.713443  639611 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:28.759550  639611 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:11:28.760722  639611 out.go:179] * Done! kubectl is now configured to use "newest-cni-438041" cluster and "default" namespace by default
	W1124 03:11:24.968153  636397 node_ready.go:57] node "default-k8s-diff-port-993813" has "Ready":"False" status (will retry)
	W1124 03:11:27.467212  636397 node_ready.go:57] node "default-k8s-diff-port-993813" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.08265536Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.085571786Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=c7817b7d-530a-409c-aab8-43f58d38ba44 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.086171018Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e712de65-2b95-41eb-9410-a40eb7c597d1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.087034341Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.087444268Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.087700366Z" level=info msg="Ran pod sandbox 8987162838f1215641849b55043fbd11233aaf3f685681811d02cc9e98fc9628 with infra container: kube-system/kindnet-xp46p/POD" id=c7817b7d-530a-409c-aab8-43f58d38ba44 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.088132923Z" level=info msg="Ran pod sandbox ca5a21b218f7f0215cb77dca9e6094acfdc3f91eeadecc3e54a3e8b9d3fdd8a3 with infra container: kube-system/kube-proxy-n85pg/POD" id=e712de65-2b95-41eb-9410-a40eb7c597d1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.088918759Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=03e388c7-6fdb-48f1-b9f8-14cdb762d379 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.089045671Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=ab145a82-963c-4788-adbd-34083fd3ab27 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.089953862Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=5bd0778d-8131-46e9-855c-fe4754e96cbe name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.090264838Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=95da6199-a9aa-449b-be7a-5ff171034ee3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.093652896Z" level=info msg="Creating container: kube-system/kindnet-xp46p/kindnet-cni" id=7a43ca28-6917-44dc-82c8-7eef564b5757 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.093742554Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.097693595Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.097751343Z" level=info msg="Creating container: kube-system/kube-proxy-n85pg/kube-proxy" id=a4f6a5eb-2fa2-4407-a54f-767b2f36955a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.097862102Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.098284416Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.10177365Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.102250548Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.135675422Z" level=info msg="Created container 0de05c61e11157b8005427ef28e507768ee27b46579cb2b4b03680c069a85445: kube-system/kindnet-xp46p/kindnet-cni" id=7a43ca28-6917-44dc-82c8-7eef564b5757 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.136524548Z" level=info msg="Starting container: 0de05c61e11157b8005427ef28e507768ee27b46579cb2b4b03680c069a85445" id=85d352f4-8729-4b90-bdeb-ec42bfab4c2e name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.139613887Z" level=info msg="Started container" PID=1635 containerID=0de05c61e11157b8005427ef28e507768ee27b46579cb2b4b03680c069a85445 description=kube-system/kindnet-xp46p/kindnet-cni id=85d352f4-8729-4b90-bdeb-ec42bfab4c2e name=/runtime.v1.RuntimeService/StartContainer sandboxID=8987162838f1215641849b55043fbd11233aaf3f685681811d02cc9e98fc9628
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.144881534Z" level=info msg="Created container e73b2a11d0a4519488b5d2a2ee214d09065f5e4440ab88ef522c15c1f7d4aabd: kube-system/kube-proxy-n85pg/kube-proxy" id=a4f6a5eb-2fa2-4407-a54f-767b2f36955a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.145456504Z" level=info msg="Starting container: e73b2a11d0a4519488b5d2a2ee214d09065f5e4440ab88ef522c15c1f7d4aabd" id=a65064b9-f854-4127-a4ae-69667052412c name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:11:29 newest-cni-438041 crio[778]: time="2025-11-24T03:11:29.148808582Z" level=info msg="Started container" PID=1636 containerID=e73b2a11d0a4519488b5d2a2ee214d09065f5e4440ab88ef522c15c1f7d4aabd description=kube-system/kube-proxy-n85pg/kube-proxy id=a65064b9-f854-4127-a4ae-69667052412c name=/runtime.v1.RuntimeService/StartContainer sandboxID=ca5a21b218f7f0215cb77dca9e6094acfdc3f91eeadecc3e54a3e8b9d3fdd8a3
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e73b2a11d0a45       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   Less than a second ago   Running             kube-proxy                0                   ca5a21b218f7f       kube-proxy-n85pg                            kube-system
	0de05c61e1115       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   Less than a second ago   Running             kindnet-cni               0                   8987162838f12       kindnet-xp46p                               kube-system
	7bae9ea81e171       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago           Running             etcd                      0                   d2a7d0abb011e       etcd-newest-cni-438041                      kube-system
	2a974d2f19269       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago           Running             kube-controller-manager   0                   f864ec09d844d       kube-controller-manager-newest-cni-438041   kube-system
	bee6907689e85       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago           Running             kube-scheduler            0                   14a59434523c0       kube-scheduler-newest-cni-438041            kube-system
	a84ce4fcd9611       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago           Running             kube-apiserver            0                   5330f8fae02d7       kube-apiserver-newest-cni-438041            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-438041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-438041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=newest-cni-438041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_11_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:11:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-438041
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:11:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:11:22 +0000   Mon, 24 Nov 2025 03:11:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:11:22 +0000   Mon, 24 Nov 2025 03:11:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:11:22 +0000   Mon, 24 Nov 2025 03:11:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 03:11:22 +0000   Mon, 24 Nov 2025 03:11:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-438041
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                6b4f4c50-807c-4c82-a9aa-10eb04614b7a
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-438041                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-xp46p                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-438041             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-438041    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-n85pg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-438041             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 0s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s    kubelet          Node newest-cni-438041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s    kubelet          Node newest-cni-438041 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s    kubelet          Node newest-cni-438041 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-438041 event: Registered Node newest-cni-438041 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [7bae9ea81e171a58a2c38f7d4fe6ff2652623ed0eaf6737d895ee205f40c14e8] <==
	{"level":"warn","ts":"2025-11-24T03:11:19.572603Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.588327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.606192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.613536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.620273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.628592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.636984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.643996Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.654767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.658020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.667062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.672863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.678762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.692878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.699035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.706332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.713098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.719146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.725802Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.734035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.743531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.756505Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.762995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.770025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:19.828315Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44736","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:11:30 up  1:53,  0 user,  load average: 6.13, 4.20, 2.58
	Linux newest-cni-438041 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0de05c61e11157b8005427ef28e507768ee27b46579cb2b4b03680c069a85445] <==
	I1124 03:11:29.336225       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:11:29.336513       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 03:11:29.336712       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:11:29.336737       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:11:29.336759       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:11:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:11:29.617832       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:11:29.617911       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:11:29.617926       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:11:29.618089       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:11:30.018748       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:11:30.018786       1 metrics.go:72] Registering metrics
	I1124 03:11:30.018924       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [a84ce4fcd9611b2a3281f842c2f84525adf5250256fb8785d078cfff92f617ca] <==
	I1124 03:11:20.320346       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 03:11:20.320559       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 03:11:20.320584       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1124 03:11:20.321348       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:11:20.325314       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:11:20.331985       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:11:20.343564       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:11:20.359162       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:11:21.224629       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:11:21.228312       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:11:21.228329       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:11:21.666999       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:11:21.718641       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:11:21.828583       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:11:21.835846       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 03:11:21.836925       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:11:21.841581       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:11:22.260366       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:11:22.614270       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:11:22.622710       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:11:22.630325       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:11:27.261275       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:11:28.212123       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:11:28.216007       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:11:28.362519       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [2a974d2f19269b66e7aa44028506f098f36cc243ae1441c71724453b74bcfbe2] <==
	I1124 03:11:27.225487       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 03:11:27.226421       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:11:27.232600       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:11:27.258560       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:11:27.258583       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:11:27.258632       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:11:27.258828       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 03:11:27.259502       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 03:11:27.260677       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:11:27.260791       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 03:11:27.260801       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:11:27.260796       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 03:11:27.261363       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 03:11:27.263081       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:11:27.263187       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:11:27.263833       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:11:27.266085       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 03:11:27.266152       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 03:11:27.266203       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 03:11:27.266221       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 03:11:27.266228       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 03:11:27.270002       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 03:11:27.273259       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-438041" podCIDRs=["10.42.0.0/24"]
	I1124 03:11:27.280358       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:11:27.285690       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e73b2a11d0a4519488b5d2a2ee214d09065f5e4440ab88ef522c15c1f7d4aabd] <==
	I1124 03:11:29.185298       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:11:29.267995       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:11:29.368859       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:11:29.368942       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 03:11:29.369051       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:11:29.389080       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:11:29.389136       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:11:29.394439       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:11:29.394777       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:11:29.394801       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:11:29.397082       1 config.go:200] "Starting service config controller"
	I1124 03:11:29.397118       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:11:29.397156       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:11:29.397163       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:11:29.397179       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:11:29.397185       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:11:29.397222       1 config.go:309] "Starting node config controller"
	I1124 03:11:29.397246       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:11:29.497661       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:11:29.497716       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:11:29.497871       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:11:29.497912       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [bee6907689e85ef7510d7404170976dbd06e73c20c66837fb00638125013678c] <==
	E1124 03:11:20.289167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:11:20.289182       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:11:20.289191       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:11:20.289231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:11:20.289274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:11:20.289289       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:11:20.289341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:11:20.289136       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:11:20.289382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:11:20.289409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:11:20.289365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:11:20.289423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:11:20.289431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:11:21.114228       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:11:21.171749       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:11:21.187989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:11:21.188127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:11:21.240590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:11:21.282064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:11:21.314425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:11:21.322507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:11:21.325572       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 03:11:21.358707       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:11:21.368982       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	I1124 03:11:23.584073       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:11:23 newest-cni-438041 kubelet[1318]: I1124 03:11:23.602928    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-438041" podStartSLOduration=1.602863128 podStartE2EDuration="1.602863128s" podCreationTimestamp="2025-11-24 03:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:23.602258626 +0000 UTC m=+1.217835534" watchObservedRunningTime="2025-11-24 03:11:23.602863128 +0000 UTC m=+1.218440035"
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: I1124 03:11:27.275129    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-438041" podStartSLOduration=5.275101158 podStartE2EDuration="5.275101158s" podCreationTimestamp="2025-11-24 03:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:23.615434877 +0000 UTC m=+1.231011785" watchObservedRunningTime="2025-11-24 03:11:27.275101158 +0000 UTC m=+4.890678065"
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: I1124 03:11:27.293578    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19fa7668-24bd-454c-a5df-37534a06d3a5-xtables-lock\") pod \"kindnet-xp46p\" (UID: \"19fa7668-24bd-454c-a5df-37534a06d3a5\") " pod="kube-system/kindnet-xp46p"
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: I1124 03:11:27.293632    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19fa7668-24bd-454c-a5df-37534a06d3a5-lib-modules\") pod \"kindnet-xp46p\" (UID: \"19fa7668-24bd-454c-a5df-37534a06d3a5\") " pod="kube-system/kindnet-xp46p"
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: I1124 03:11:27.293657    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/19fa7668-24bd-454c-a5df-37534a06d3a5-cni-cfg\") pod \"kindnet-xp46p\" (UID: \"19fa7668-24bd-454c-a5df-37534a06d3a5\") " pod="kube-system/kindnet-xp46p"
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: I1124 03:11:27.293677    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86f875e2-7efc-4b60-b031-a1de71ea7502-lib-modules\") pod \"kube-proxy-n85pg\" (UID: \"86f875e2-7efc-4b60-b031-a1de71ea7502\") " pod="kube-system/kube-proxy-n85pg"
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: I1124 03:11:27.293757    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6t99\" (UniqueName: \"kubernetes.io/projected/19fa7668-24bd-454c-a5df-37534a06d3a5-kube-api-access-w6t99\") pod \"kindnet-xp46p\" (UID: \"19fa7668-24bd-454c-a5df-37534a06d3a5\") " pod="kube-system/kindnet-xp46p"
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: I1124 03:11:27.293849    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/86f875e2-7efc-4b60-b031-a1de71ea7502-kube-proxy\") pod \"kube-proxy-n85pg\" (UID: \"86f875e2-7efc-4b60-b031-a1de71ea7502\") " pod="kube-system/kube-proxy-n85pg"
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: I1124 03:11:27.293928    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86f875e2-7efc-4b60-b031-a1de71ea7502-xtables-lock\") pod \"kube-proxy-n85pg\" (UID: \"86f875e2-7efc-4b60-b031-a1de71ea7502\") " pod="kube-system/kube-proxy-n85pg"
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: I1124 03:11:27.294006    1318 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfq46\" (UniqueName: \"kubernetes.io/projected/86f875e2-7efc-4b60-b031-a1de71ea7502-kube-api-access-qfq46\") pod \"kube-proxy-n85pg\" (UID: \"86f875e2-7efc-4b60-b031-a1de71ea7502\") " pod="kube-system/kube-proxy-n85pg"
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: I1124 03:11:27.322217    1318 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: I1124 03:11:27.322929    1318 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: E1124 03:11:27.400092    1318 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: E1124 03:11:27.400128    1318 projected.go:196] Error preparing data for projected volume kube-api-access-w6t99 for pod kube-system/kindnet-xp46p: configmap "kube-root-ca.crt" not found
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: E1124 03:11:27.400216    1318 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19fa7668-24bd-454c-a5df-37534a06d3a5-kube-api-access-w6t99 podName:19fa7668-24bd-454c-a5df-37534a06d3a5 nodeName:}" failed. No retries permitted until 2025-11-24 03:11:27.900179201 +0000 UTC m=+5.515756107 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-w6t99" (UniqueName: "kubernetes.io/projected/19fa7668-24bd-454c-a5df-37534a06d3a5-kube-api-access-w6t99") pod "kindnet-xp46p" (UID: "19fa7668-24bd-454c-a5df-37534a06d3a5") : configmap "kube-root-ca.crt" not found
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: E1124 03:11:27.400609    1318 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: E1124 03:11:27.400637    1318 projected.go:196] Error preparing data for projected volume kube-api-access-qfq46 for pod kube-system/kube-proxy-n85pg: configmap "kube-root-ca.crt" not found
	Nov 24 03:11:27 newest-cni-438041 kubelet[1318]: E1124 03:11:27.400691    1318 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/86f875e2-7efc-4b60-b031-a1de71ea7502-kube-api-access-qfq46 podName:86f875e2-7efc-4b60-b031-a1de71ea7502 nodeName:}" failed. No retries permitted until 2025-11-24 03:11:27.900676813 +0000 UTC m=+5.516253722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qfq46" (UniqueName: "kubernetes.io/projected/86f875e2-7efc-4b60-b031-a1de71ea7502-kube-api-access-qfq46") pod "kube-proxy-n85pg" (UID: "86f875e2-7efc-4b60-b031-a1de71ea7502") : configmap "kube-root-ca.crt" not found
	Nov 24 03:11:28 newest-cni-438041 kubelet[1318]: E1124 03:11:28.000469    1318 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:11:28 newest-cni-438041 kubelet[1318]: E1124 03:11:28.000511    1318 projected.go:196] Error preparing data for projected volume kube-api-access-qfq46 for pod kube-system/kube-proxy-n85pg: configmap "kube-root-ca.crt" not found
	Nov 24 03:11:28 newest-cni-438041 kubelet[1318]: E1124 03:11:28.000578    1318 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/86f875e2-7efc-4b60-b031-a1de71ea7502-kube-api-access-qfq46 podName:86f875e2-7efc-4b60-b031-a1de71ea7502 nodeName:}" failed. No retries permitted until 2025-11-24 03:11:29.000554385 +0000 UTC m=+6.616131293 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-qfq46" (UniqueName: "kubernetes.io/projected/86f875e2-7efc-4b60-b031-a1de71ea7502-kube-api-access-qfq46") pod "kube-proxy-n85pg" (UID: "86f875e2-7efc-4b60-b031-a1de71ea7502") : configmap "kube-root-ca.crt" not found
	Nov 24 03:11:28 newest-cni-438041 kubelet[1318]: E1124 03:11:28.000487    1318 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:11:28 newest-cni-438041 kubelet[1318]: E1124 03:11:28.000617    1318 projected.go:196] Error preparing data for projected volume kube-api-access-w6t99 for pod kube-system/kindnet-xp46p: configmap "kube-root-ca.crt" not found
	Nov 24 03:11:28 newest-cni-438041 kubelet[1318]: E1124 03:11:28.000690    1318 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/19fa7668-24bd-454c-a5df-37534a06d3a5-kube-api-access-w6t99 podName:19fa7668-24bd-454c-a5df-37534a06d3a5 nodeName:}" failed. No retries permitted until 2025-11-24 03:11:29.000668507 +0000 UTC m=+6.616245431 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-w6t99" (UniqueName: "kubernetes.io/projected/19fa7668-24bd-454c-a5df-37534a06d3a5-kube-api-access-w6t99") pod "kindnet-xp46p" (UID: "19fa7668-24bd-454c-a5df-37534a06d3a5") : configmap "kube-root-ca.crt" not found
	Nov 24 03:11:29 newest-cni-438041 kubelet[1318]: I1124 03:11:29.545078    1318 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n85pg" podStartSLOduration=2.545055996 podStartE2EDuration="2.545055996s" podCreationTimestamp="2025-11-24 03:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:29.53296381 +0000 UTC m=+7.148540711" watchObservedRunningTime="2025-11-24 03:11:29.545055996 +0000 UTC m=+7.160632903"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438041 -n newest-cni-438041
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-438041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-b5rlp coredns-66bc5c9577-mwvq8 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-438041 describe pod coredns-66bc5c9577-b5rlp coredns-66bc5c9577-mwvq8 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-438041 describe pod coredns-66bc5c9577-b5rlp coredns-66bc5c9577-mwvq8 storage-provisioner: exit status 1 (66.220998ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-b5rlp" not found
	Error from server (NotFound): pods "coredns-66bc5c9577-mwvq8" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-438041 describe pod coredns-66bc5c9577-b5rlp coredns-66bc5c9577-mwvq8 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.20s)
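The non-running pods listed at helpers_test.go:280 come from the field-selector query above; a minimal way to re-run that check by hand against this profile (context name taken from this run) is:

	kubectl --context newest-cni-438041 get po -A --field-selector=status.phase!=Running -o=jsonpath={.items[*].metadata.name}

The describe at helpers_test.go:285 then reported NotFound, presumably because those pod instances had already been replaced between the two calls.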

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-579951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-579951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (266.30215ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:11:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-579951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
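The failure here is the paused-state probe rather than the addon itself: per the stderr above, minikube decides whether the node is paused by running runc inside it, and /run/runc does not exist on this crio node. A minimal manual reproduction of the failing probe (a sketch, assuming minikube ssh against the profile from this run):

	minikube ssh -p old-k8s-version-579951 "sudo runc list -f json"

which exits 1 with the same "open /run/runc: no such file or directory" error.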
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-579951 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-579951 describe deploy/metrics-server -n kube-system: exit status 1 (63.108367ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-579951 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
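The image assertion at start_stop_delete_test.go:219 amounts to reading the container image off the deployment; a hand-run equivalent sketch (same context, namespace, and deployment name the test uses):

	kubectl --context old-k8s-version-579951 -n kube-system get deploy metrics-server -o=jsonpath={.spec.template.spec.containers[*].image}

Here there is nothing to read: the enable step never created the deployment, hence the NotFound above and the empty deployment info.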
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-579951
helpers_test.go:243: (dbg) docker inspect old-k8s-version-579951:

-- stdout --
	[
	    {
	        "Id": "3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7",
	        "Created": "2025-11-24T03:10:32.99838887Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 625680,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:10:33.040246436Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7/hosts",
	        "LogPath": "/var/lib/docker/containers/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7-json.log",
	        "Name": "/old-k8s-version-579951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-579951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-579951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7",
	                "LowerDir": "/var/lib/docker/overlay2/7475b95118b29264700554efe3a336409ac4e9cbb1a3e2671ad217c583d5e887-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7475b95118b29264700554efe3a336409ac4e9cbb1a3e2671ad217c583d5e887/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7475b95118b29264700554efe3a336409ac4e9cbb1a3e2671ad217c583d5e887/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7475b95118b29264700554efe3a336409ac4e9cbb1a3e2671ad217c583d5e887/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-579951",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-579951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-579951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-579951",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-579951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "de521a04cfbdcc45baea14cb0323c17f10d10d96b1d9a94fc4e029fd18648620",
	            "SandboxKey": "/var/run/docker/netns/de521a04cfbd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-579951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ca041b7f18e6d1ec0481cbe24b969048a40ddf73308219ebc68c053037d8a9f",
	                    "EndpointID": "0cc83b0197648b0a8d8a94604b8e9728f0d64a4d5c85a04beec3e4a797b983ee",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "1e:e0:6f:0a:d7:e3",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-579951",
	                        "3f9d9080b81a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
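The inspect above dumps the full container JSON; when only the run/pause state and address matter for a post-mortem, a narrower query with docker's standard --format templating (not something the harness itself runs) is enough:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-579951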
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-579951 -n old-k8s-version-579951
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-579951 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-579951 logs -n 25: (1.057909833s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-965704 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                                                                                       │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ delete  │ -p kubernetes-upgrade-034173                                                                                                                                                                                                                  │ kubernetes-upgrade-034173    │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /var/lib/kubelet/config.yaml                                                                                                                                                                                       │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo systemctl cat docker --no-pager                                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /etc/docker/daemon.json                                                                                                                                                                                            │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo docker system info                                                                                                                                                                                                     │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                               │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                         │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cri-dockerd --version                                                                                                                                                                                                  │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo systemctl cat containerd --no-pager                                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo containerd config dump                                                                                                                                                                                                 │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo crio config                                                                                                                                                                                                            │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ delete  │ -p flannel-965704                                                                                                                                                                                                                             │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-438041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-579951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:10:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:10:57.127829  639611 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:10:57.127990  639611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:10:57.128000  639611 out.go:374] Setting ErrFile to fd 2...
	I1124 03:10:57.128004  639611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:10:57.128242  639611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:10:57.128839  639611 out.go:368] Setting JSON to false
	I1124 03:10:57.129993  639611 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6804,"bootTime":1763947053,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:10:57.130043  639611 start.go:143] virtualization: kvm guest
	I1124 03:10:57.131842  639611 out.go:179] * [newest-cni-438041] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:10:57.133006  639611 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:10:57.133003  639611 notify.go:221] Checking for updates...
	I1124 03:10:57.135165  639611 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:10:57.136402  639611 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:10:57.137671  639611 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:10:57.138741  639611 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:10:57.139904  639611 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:10:57.141390  639611 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:10:57.141496  639611 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:10:57.141578  639611 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:10:57.141703  639611 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:10:57.166641  639611 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:10:57.166738  639611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:10:57.221961  639611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-24 03:10:57.211378242 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:10:57.222054  639611 docker.go:319] overlay module found
	I1124 03:10:57.223745  639611 out.go:179] * Using the docker driver based on user configuration
	I1124 03:10:57.224957  639611 start.go:309] selected driver: docker
	I1124 03:10:57.224977  639611 start.go:927] validating driver "docker" against <nil>
	I1124 03:10:57.224994  639611 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:10:57.225758  639611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:10:57.290865  639611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-24 03:10:57.279924959 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:10:57.291115  639611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1124 03:10:57.291161  639611 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1124 03:10:57.291452  639611 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:10:57.293881  639611 out.go:179] * Using Docker driver with root privileges
	I1124 03:10:57.295058  639611 cni.go:84] Creating CNI manager for ""
	I1124 03:10:57.295146  639611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:10:57.295161  639611 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:10:57.295265  639611 start.go:353] cluster config:
	{Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:10:57.296817  639611 out.go:179] * Starting "newest-cni-438041" primary control-plane node in "newest-cni-438041" cluster
	I1124 03:10:57.297866  639611 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:10:57.299907  639611 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:10:57.301070  639611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:10:57.301103  639611 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:10:57.301112  639611 cache.go:65] Caching tarball of preloaded images
	I1124 03:10:57.301177  639611 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:10:57.301210  639611 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:10:57.301222  639611 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:10:57.301343  639611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json ...
	I1124 03:10:57.301366  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json: {Name:mk1bf53574cdc9152c6531d50672e7a950b9d2f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
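
The profile config saved above is plain JSON, so it can be inspected directly when debugging a run like this one. A minimal check (path copied from the log line above; python3 on the agent is an assumption, plain cat works too):

	# Pretty-print the cluster config minikube just wrote for this profile.
	python3 -m json.tool /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json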
	I1124 03:10:57.325407  639611 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:10:57.325433  639611 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:10:57.325454  639611 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:10:57.325494  639611 start.go:360] acquireMachinesLock for newest-cni-438041: {Name:mk895e89056f5ce7564002ba75457dcfde41ce4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:10:57.325596  639611 start.go:364] duration metric: took 82.202µs to acquireMachinesLock for "newest-cni-438041"
	I1124 03:10:57.325624  639611 start.go:93] Provisioning new machine with config: &{Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:10:57.325724  639611 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:10:55.541109  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (3.244075519s)
	I1124 03:10:55.541150  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1124 03:10:55.541172  631782 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 03:10:55.541227  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 03:10:56.794831  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.25357343s)
	I1124 03:10:56.794863  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 03:10:56.794908  631782 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 03:10:56.794989  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
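
In this interleaved run (PID 631782), each cached image tarball is copied to the node and loaded into CRI-O's image store through podman, one image at a time. Replaying a single load step inside the node would look roughly like this (command copied from the Run: lines above; assumes the tarball is still present under /var/lib/minikube/images):

	# Inside the minikube node: load one cached image into CRI-O's storage.
	sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1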
	I1124 03:10:55.833612  636397 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993813:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.620337954s)
	I1124 03:10:55.833645  636397 kic.go:203] duration metric: took 5.620509753s to extract preloaded images to volume ...
	W1124 03:10:55.833730  636397 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:10:55.833774  636397 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:10:55.833824  636397 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:10:55.899529  636397 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993813 --name default-k8s-diff-port-993813 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993813 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993813 --network default-k8s-diff-port-993813 --ip 192.168.76.2 --volume default-k8s-diff-port-993813:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:10:56.489655  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Running}}
	I1124 03:10:56.513036  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:10:56.535229  636397 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993813 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:10:56.595848  636397 oci.go:144] the created container "default-k8s-diff-port-993813" has a running status.
	I1124 03:10:56.595922  636397 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa...
	I1124 03:10:56.701587  636397 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:10:56.875193  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:10:56.894915  636397 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:10:56.894937  636397 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993813 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:10:56.946242  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:10:56.964911  636397 machine.go:94] provisionDockerMachine start ...
	I1124 03:10:56.965003  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:10:56.983380  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:10:56.983615  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:10:56.983627  636397 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:10:56.984346  636397 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37014->127.0.0.1:33468: read: connection reset by peer
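
The "connection reset by peer" here is a transient failure while sshd is still coming up inside the just-created container; provisioning retries, and the same port answers at 03:11:00 below. A manual probe against the forwarded port, assuming the container is still running (port and key path taken from this log):

	# Hand-rolled equivalent of libmachine's SSH hostname check.
	ssh -i /home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa \
	    -p 33468 docker@127.0.0.1 hostname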
	I1124 03:10:57.234863  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:57.734595  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:58.234694  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:58.734330  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:59.234707  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:59.735106  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:00.234710  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:00.735086  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:01.235238  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:01.735122  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
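
This loop (PID 623347) polls roughly every 500ms for the default ServiceAccount to appear, which is how minikube decides kubeadm has finished elevating kube-system privileges; it completes at 03:11:03 further down. The probe it repeats is a single command, runnable by hand on the node:

	sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig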
	I1124 03:10:57.328166  639611 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:10:57.328471  639611 start.go:159] libmachine.API.Create for "newest-cni-438041" (driver="docker")
	I1124 03:10:57.328503  639611 client.go:173] LocalClient.Create starting
	I1124 03:10:57.328585  639611 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:10:57.328619  639611 main.go:143] libmachine: Decoding PEM data...
	I1124 03:10:57.328645  639611 main.go:143] libmachine: Parsing certificate...
	I1124 03:10:57.328730  639611 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:10:57.328758  639611 main.go:143] libmachine: Decoding PEM data...
	I1124 03:10:57.328776  639611 main.go:143] libmachine: Parsing certificate...
	I1124 03:10:57.329238  639611 cli_runner.go:164] Run: docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:10:57.347161  639611 cli_runner.go:211] docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:10:57.347240  639611 network_create.go:284] running [docker network inspect newest-cni-438041] to gather additional debugging logs...
	I1124 03:10:57.347259  639611 cli_runner.go:164] Run: docker network inspect newest-cni-438041
	W1124 03:10:57.366750  639611 cli_runner.go:211] docker network inspect newest-cni-438041 returned with exit code 1
	I1124 03:10:57.366777  639611 network_create.go:287] error running [docker network inspect newest-cni-438041]: docker network inspect newest-cni-438041: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-438041 not found
	I1124 03:10:57.366807  639611 network_create.go:289] output of [docker network inspect newest-cni-438041]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-438041 not found
	
	** /stderr **
	I1124 03:10:57.366976  639611 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:10:57.385293  639611 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:10:57.386152  639611 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:10:57.387409  639611 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:10:57.388971  639611 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-50b2e4e61586 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:0d:88:19:7d:df} reservation:<nil>}
	I1124 03:10:57.389487  639611 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6fb41680caed IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:a0:6c:44:95:b2} reservation:<nil>}
	I1124 03:10:57.390236  639611 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018f44a0}
	I1124 03:10:57.390257  639611 network_create.go:124] attempt to create docker network newest-cni-438041 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 03:10:57.390305  639611 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-438041 newest-cni-438041
	I1124 03:10:57.440525  639611 network_create.go:108] docker network newest-cni-438041 192.168.94.0/24 created
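
As the skipped-subnet lines above show, minikube walks the private 192.168.x.0/24 ranges, skips any subnet already claimed by an existing bridge, and creates the first free one. A manual sketch of the same sequence, using the subnet chosen above (labels copied from the Run: line; the network name is the profile name):

	# See which subnets existing networks already occupy, then create a free one.
	docker network inspect bridge --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
	docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 \
	  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	  --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-438041 \
	  newest-cni-438041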
	I1124 03:10:57.440568  639611 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-438041" container
	I1124 03:10:57.440642  639611 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:10:57.458704  639611 cli_runner.go:164] Run: docker volume create newest-cni-438041 --label name.minikube.sigs.k8s.io=newest-cni-438041 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:10:57.476351  639611 oci.go:103] Successfully created a docker volume newest-cni-438041
	I1124 03:10:57.476450  639611 cli_runner.go:164] Run: docker run --rm --name newest-cni-438041-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-438041 --entrypoint /usr/bin/test -v newest-cni-438041:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:10:58.353729  639611 oci.go:107] Successfully prepared a docker volume newest-cni-438041
	I1124 03:10:58.353794  639611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:10:58.353806  639611 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:10:58.353903  639611 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-438041:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:10:58.184837  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.389817981s)
	I1124 03:10:58.184869  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1124 03:10:58.184909  631782 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:10:58.184953  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:11:00.135230  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:11:00.135263  636397 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993813"
	I1124 03:11:00.135337  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.156666  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:00.157040  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:11:00.157061  636397 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993813 && echo "default-k8s-diff-port-993813" | sudo tee /etc/hostname
	I1124 03:11:00.317337  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:11:00.317424  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.338575  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:00.338824  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:11:00.338843  636397 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993813/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:11:00.487669  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:11:00.487698  636397 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:11:00.487736  636397 ubuntu.go:190] setting up certificates
	I1124 03:11:00.487751  636397 provision.go:84] configureAuth start
	I1124 03:11:00.487815  636397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:11:00.511564  636397 provision.go:143] copyHostCerts
	I1124 03:11:00.511630  636397 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:11:00.511666  636397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:11:00.511735  636397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:11:00.514009  636397 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:11:00.514030  636397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:11:00.514075  636397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:11:00.514159  636397 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:11:00.514167  636397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:11:00.514200  636397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:11:00.514270  636397 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993813 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-993813 localhost minikube]
	I1124 03:11:00.658058  636397 provision.go:177] copyRemoteCerts
	I1124 03:11:00.658133  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:11:00.658198  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.678015  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:00.787811  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:11:00.908237  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 03:11:00.926667  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:11:00.945146  636397 provision.go:87] duration metric: took 457.380171ms to configureAuth
	I1124 03:11:00.945175  636397 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:11:00.945368  636397 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:00.945497  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.963523  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:00.963843  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:11:00.963867  636397 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:11:01.528016  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:11:01.528042  636397 machine.go:97] duration metric: took 4.563106275s to provisionDockerMachine
	I1124 03:11:01.528055  636397 client.go:176] duration metric: took 12.433514854s to LocalClient.Create
	I1124 03:11:01.528076  636397 start.go:167] duration metric: took 12.433610792s to libmachine.API.Create "default-k8s-diff-port-993813"
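
The crio restart just above picks up the CRIO_MINIKUBE_OPTIONS drop-in written over SSH. A quick way to confirm the option landed, assuming the profile is still up:

	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-993813 cat /etc/sysconfig/crio.minikube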
	I1124 03:11:01.528087  636397 start.go:293] postStartSetup for "default-k8s-diff-port-993813" (driver="docker")
	I1124 03:11:01.528107  636397 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:11:01.528192  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:11:01.528250  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:01.550426  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:01.725783  636397 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:11:01.731121  636397 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:11:01.731156  636397 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:11:01.731171  636397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:11:01.731245  636397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:11:01.731344  636397 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:11:01.731461  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:11:01.741273  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:02.020513  636397 start.go:296] duration metric: took 492.40359ms for postStartSetup
	I1124 03:11:02.119944  636397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:11:02.137546  636397 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/config.json ...
	I1124 03:11:02.185355  636397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:11:02.185405  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:02.201426  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:02.297393  636397 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:11:02.302398  636397 start.go:128] duration metric: took 13.210072434s to createHost
	I1124 03:11:02.302422  636397 start.go:83] releasing machines lock for "default-k8s-diff-port-993813", held for 13.210223546s
	I1124 03:11:02.302502  636397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:11:02.319872  636397 ssh_runner.go:195] Run: cat /version.json
	I1124 03:11:02.319913  636397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:11:02.319948  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:02.319995  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:02.340353  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:02.340353  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:02.486835  636397 ssh_runner.go:195] Run: systemctl --version
	I1124 03:11:02.493433  636397 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:11:02.533294  636397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:11:02.538557  636397 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:11:02.538616  636397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:11:02.908750  636397 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:11:02.908778  636397 start.go:496] detecting cgroup driver to use...
	I1124 03:11:02.908812  636397 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:11:02.908861  636397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:11:02.925941  636397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:11:02.941046  636397 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:11:02.941102  636397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:11:02.959121  636397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:11:02.975801  636397 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:11:03.054110  636397 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:11:03.174491  636397 docker.go:234] disabling docker service ...
	I1124 03:11:03.174560  636397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:11:03.193664  636397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:11:03.207203  636397 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:11:03.340321  636397 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:11:03.515878  636397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:11:03.529161  636397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:11:03.543103  636397 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:11:03.543166  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.604968  636397 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:11:03.605035  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.624611  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.645648  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.689119  636397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:11:03.698440  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.783084  636397 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
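
The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, conmon cgroup, and the default_sysctls block (the sequence continues below after the interleaved old-k8s-version output). A spot check of the result, assuming the node is reachable:

	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-993813 \
	  "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls' /etc/crio/crio.conf.d/02-crio.conf"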
	I1124 03:11:02.234544  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:02.735113  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:03.234728  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:03.735125  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:03.823251  623347 kubeadm.go:1114] duration metric: took 11.180431183s to wait for elevateKubeSystemPrivileges
	I1124 03:11:03.823284  623347 kubeadm.go:403] duration metric: took 22.234422884s to StartCluster
	I1124 03:11:03.823307  623347 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:03.823374  623347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:03.824432  623347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:03.824684  623347 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:03.824740  623347 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:03.824845  623347 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-579951"
	I1124 03:11:03.824727  623347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:03.824906  623347 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-579951"
	I1124 03:11:03.824917  623347 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:11:03.824923  623347 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-579951"
	I1124 03:11:03.824900  623347 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-579951"
	I1124 03:11:03.825024  623347 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:03.825377  623347 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:03.825590  623347 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:03.826953  623347 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:03.828395  623347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:03.862253  623347 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-579951"
	I1124 03:11:03.862302  623347 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:03.862810  623347 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:03.864365  623347 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:03.807318  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.820946  636397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:11:03.839099  636397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:11:03.853603  636397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:04.008696  636397 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:11:04.280958  636397 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:11:04.281140  636397 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:11:04.287138  636397 start.go:564] Will wait 60s for crictl version
	I1124 03:11:04.287195  636397 ssh_runner.go:195] Run: which crictl
	I1124 03:11:04.296400  636397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:11:04.343627  636397 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:11:04.343993  636397 ssh_runner.go:195] Run: crio --version
	I1124 03:11:04.389849  636397 ssh_runner.go:195] Run: crio --version
	I1124 03:11:04.426944  636397 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:11:03.866933  623347 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:03.866992  623347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:03.867050  623347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:03.908181  623347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:03.911219  623347 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:03.911443  623347 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:03.911619  623347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:03.949048  623347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:03.966864  623347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
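The replace pipeline above injects a hosts plugin block (and a log directive) into the coredns ConfigMap so that host.minikube.internal resolves inside the cluster. A hedged sketch of the resulting Corefile fragment, assuming the stock minikube Corefile:

	# relevant Corefile portion after the rewrite (sketch, not from this log):
	# .:53 {
	#     log
	#     errors
	#     hosts {
	#        192.168.103.1 host.minikube.internal
	#        fallthrough
	#     }
	#     forward . /etc/resolv.conf
	#     ...
	# }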
	I1124 03:11:04.039230  623347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:04.056821  623347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:04.079844  623347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:04.252855  623347 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:04.253835  623347 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-579951" to be "Ready" ...
	I1124 03:11:04.604404  623347 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:04.605457  623347 addons.go:530] duration metric: took 780.71049ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:04.763969  623347 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-579951" context rescaled to 1 replicas
	W1124 03:11:06.257869  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	I1124 03:11:03.812979  639611 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-438041:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.459016714s)
	I1124 03:11:03.813017  639611 kic.go:203] duration metric: took 5.459207202s to extract preloaded images to volume ...
	W1124 03:11:03.813173  639611 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:11:03.813255  639611 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:11:03.813304  639611 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:11:03.930433  639611 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-438041 --name newest-cni-438041 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-438041 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-438041 --network newest-cni-438041 --ip 192.168.94.2 --volume newest-cni-438041:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
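The container publishes ports 22, 2376, 5000, 8443 and 32443 to ephemeral host ports bound to 127.0.0.1. The mapping can be resolved with the standard docker port subcommand, which is effectively what the inspect template further down does:

	docker port newest-cni-438041 22
	# e.g. 127.0.0.1:33473  (the SSH endpoint the provisioner dials below)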
	I1124 03:11:04.484106  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Running}}
	I1124 03:11:04.506492  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:04.527784  639611 cli_runner.go:164] Run: docker exec newest-cni-438041 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:11:04.586541  639611 oci.go:144] the created container "newest-cni-438041" has a running status.
	I1124 03:11:04.586577  639611 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa...
	I1124 03:11:04.720361  639611 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:11:04.758530  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:04.794751  639611 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:11:04.794778  639611 kic_runner.go:114] Args: [docker exec --privileged newest-cni-438041 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:11:04.848966  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:04.868444  639611 machine.go:94] provisionDockerMachine start ...
	I1124 03:11:04.868542  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:04.886704  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:04.887098  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:04.887115  639611 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:11:04.887825  639611 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60056->127.0.0.1:33473: read: connection reset by peer
	I1124 03:11:03.698009  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.513031284s)
	I1124 03:11:03.698036  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 03:11:03.698072  631782 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:11:03.698135  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:11:04.540749  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 03:11:04.540878  631782 cache_images.go:125] Successfully loaded all cached images
	I1124 03:11:04.540962  631782 cache_images.go:94] duration metric: took 16.632965714s to LoadCachedImages
	I1124 03:11:04.540998  631782 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 03:11:04.541478  631782 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-603010 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
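The [Unit]/[Service] fragment above becomes a systemd drop-in (the scp of 10-kubeadm.conf appears further down in this log). Once written, the merged unit can be inspected on the node with stock systemd tooling:

	sudo systemctl cat kubelet
	# prints /lib/systemd/system/kubelet.service followed by
	# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf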
	I1124 03:11:04.541629  631782 ssh_runner.go:195] Run: crio config
	I1124 03:11:04.613074  631782 cni.go:84] Creating CNI manager for ""
	I1124 03:11:04.613101  631782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:04.613135  631782 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:11:04.613165  631782 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603010 NodeName:no-preload-603010 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:04.613332  631782 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603010"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
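The rendered kubeadm config above is shipped to /var/tmp/minikube/kubeadm.yaml.new and later copied into place. A hedged sketch of sanity-checking it by hand (kubeadm config validate exists in recent kubeadm releases; same pinned PATH style as the init call below):

	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new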
	I1124 03:11:04.613410  631782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:04.624805  631782 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 03:11:04.624880  631782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:04.636504  631782 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1124 03:11:04.636570  631782 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1124 03:11:04.636598  631782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 03:11:04.637106  631782 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1124 03:11:04.641001  631782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 03:11:04.641031  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1124 03:11:05.924351  631782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:05.942273  631782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 03:11:05.947268  631782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 03:11:05.947299  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1124 03:11:06.319700  631782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 03:11:06.328312  631782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 03:11:06.328362  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
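Each download URL above pins a companion .sha256 file. A sketch of checking one binary by hand before it is transferred to the node (digest comparison only; paths as in the cache layout shown):

	curl -fsSL https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
	sha256sum /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	# the two hex digests must match; only then is the binary scp'd to
	# /var/lib/minikube/binaries/v1.34.1/kubeadm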
	I1124 03:11:06.576699  631782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:06.584640  631782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:11:06.596881  631782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:06.706372  631782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1124 03:11:06.725651  631782 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:06.731312  631782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
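The bash one-liner above rewrites /etc/hosts through a temp file so the edit is atomic: it drops any stale control-plane.minikube.internal entry and appends the current node IP. Afterwards the node resolves the name locally:

	grep control-plane.minikube.internal /etc/hosts
	# 192.168.85.2	control-plane.minikube.internal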
	I1124 03:11:06.856376  631782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:06.964324  631782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:06.983343  631782 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010 for IP: 192.168.85.2
	I1124 03:11:06.983368  631782 certs.go:195] generating shared ca certs ...
	I1124 03:11:06.983389  631782 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:06.983554  631782 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:06.983623  631782 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:06.983638  631782 certs.go:257] generating profile certs ...
	I1124 03:11:06.983713  631782 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key
	I1124 03:11:06.983731  631782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.crt with IP's: []
	I1124 03:11:07.236879  631782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.crt ...
	I1124 03:11:07.236911  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.crt: {Name:mk2d55635da2a9326437d41d4577da0fe14409fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.237058  631782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key ...
	I1124 03:11:07.237070  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key: {Name:mkaa577d5c9ee92828884715bd0dda9017fc9779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.237153  631782 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738
	I1124 03:11:07.237166  631782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 03:11:07.327953  631782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738 ...
	I1124 03:11:07.327981  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738: {Name:mk8a9cae6d8e3a4cc6d6140e38080bb869e23acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.328138  631782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738 ...
	I1124 03:11:07.328156  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738: {Name:mkbf13b81ddaf24f4938052522adb9836ef8e1d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.328261  631782 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt
	I1124 03:11:07.328354  631782 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key
	I1124 03:11:07.328436  631782 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key
	I1124 03:11:07.328458  631782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt with IP's: []
	I1124 03:11:07.358779  631782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt ...
	I1124 03:11:07.358798  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt: {Name:mk394a0184e993e66f37c39d12264673ee1326c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.358929  631782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key ...
	I1124 03:11:07.358944  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key: {Name:mkf0922c5b9c127348bd0d94fa6adc983ccc147a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
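The apiserver certificate generated above bakes the service VIP, the loopback addresses, and the node IP into its SANs. A sketch for verifying them (OpenSSL 1.1.1+ supports -ext; DNS SANs such as control-plane.minikube.internal are omitted in the comment):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt
	# expect at least: IP Address:10.96.0.1, IP Address:127.0.0.1,
	#                  IP Address:10.0.0.1, IP Address:192.168.85.2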
	I1124 03:11:07.359146  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:07.359197  631782 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:07.359210  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:07.359245  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:07.359288  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:07.359324  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:07.359391  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:07.360046  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:07.377802  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:07.394719  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:07.411226  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:07.427651  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:11:07.443818  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:11:07.461178  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:07.477210  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:11:07.493639  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:07.511874  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:07.528421  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:07.544763  631782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:07.557346  631782 ssh_runner.go:195] Run: openssl version
	I1124 03:11:07.563499  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:07.571402  631782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:07.574952  631782 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:07.575004  631782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:07.608612  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:07.616619  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:07.624657  631782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:07.628272  631782 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:07.628318  631782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:07.662522  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:11:07.670558  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:07.678360  631782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:07.681796  631782 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:07.681850  631782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:07.715936  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
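The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory lookup: each CA in /etc/ssl/certs is reachable via a <subject-hash>.0 symlink, so the TLS stack can find it without rescanning the directory. Reproducing the hash for the minikube CA:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 link created above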
	I1124 03:11:07.723734  631782 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:07.727008  631782 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:11:07.727066  631782 kubeadm.go:401] StartCluster: {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:07.727159  631782 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:07.727200  631782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:07.757836  631782 cri.go:89] found id: ""
	I1124 03:11:07.757930  631782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:07.767026  631782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:11:07.775281  631782 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:11:07.775329  631782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:11:07.782944  631782 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:11:07.782960  631782 kubeadm.go:158] found existing configuration files:
	
	I1124 03:11:07.782996  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:11:07.790173  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:11:07.790211  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:11:07.797407  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:11:07.804469  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:11:07.804513  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:11:07.811339  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:11:07.818449  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:11:07.818485  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:11:07.825301  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:11:07.832368  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:11:07.832409  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:11:07.839105  631782 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:11:07.875134  631782 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:11:07.875186  631782 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:11:07.899771  631782 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:11:07.899860  631782 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:11:07.899936  631782 kubeadm.go:319] OS: Linux
	I1124 03:11:07.900023  631782 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:11:07.900109  631782 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:11:07.900181  631782 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:11:07.900246  631782 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:11:07.900310  631782 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:11:07.900374  631782 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:11:07.900436  631782 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:11:07.900489  631782 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:11:07.966533  631782 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:11:07.966689  631782 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:11:07.966849  631782 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
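As the preflight hint suggests, the control-plane images can be pre-pulled before init; a sketch using the same pinned PATH and generated config as the kubeadm init invocation above:

	sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
	  kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml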
	I1124 03:11:07.981358  631782 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:11:04.428062  636397 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:11:04.452862  636397 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:11:04.458281  636397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:04.471103  636397 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:11:04.471281  636397 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:11:04.471346  636397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:04.523060  636397 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:04.523089  636397 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:11:04.523147  636397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:04.562653  636397 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:04.562684  636397 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:11:04.562695  636397 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 03:11:04.562806  636397 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:11:04.562939  636397 ssh_runner.go:195] Run: crio config
	I1124 03:11:04.638357  636397 cni.go:84] Creating CNI manager for ""
	I1124 03:11:04.638382  636397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:04.638402  636397 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:11:04.638430  636397 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993813 NodeName:default-k8s-diff-port-993813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:04.638602  636397 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993813"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:11:04.638670  636397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:04.649639  636397 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:11:04.649707  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:04.665638  636397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 03:11:04.685753  636397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:04.706728  636397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1124 03:11:04.727449  636397 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:04.732474  636397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:04.750204  636397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:04.878850  636397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:04.905254  636397 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813 for IP: 192.168.76.2
	I1124 03:11:04.905269  636397 certs.go:195] generating shared ca certs ...
	I1124 03:11:04.905285  636397 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:04.905416  636397 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:04.905456  636397 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:04.905465  636397 certs.go:257] generating profile certs ...
	I1124 03:11:04.905521  636397 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key
	I1124 03:11:04.905533  636397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.crt with IP's: []
	I1124 03:11:05.049206  636397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.crt ...
	I1124 03:11:05.049242  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.crt: {Name:mk818bd7c5f4a63b56241a5f5b815a5c96f8af6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.049427  636397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key ...
	I1124 03:11:05.049453  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key: {Name:mkb83de72d7be9aac5a3b6d7ffec3016949857c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.049582  636397 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619
	I1124 03:11:05.049600  636397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 03:11:05.290005  636397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619 ...
	I1124 03:11:05.290086  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619: {Name:mkbe37296015109a5ee861e9a87e29d9440c243c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.290281  636397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619 ...
	I1124 03:11:05.290300  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619: {Name:mk596e1b3db31f58cc0b8eb40ec231f070ee1f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.290403  636397 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt
	I1124 03:11:05.290503  636397 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key
	I1124 03:11:05.290584  636397 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key
	I1124 03:11:05.290607  636397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt with IP's: []
	I1124 03:11:05.405376  636397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt ...
	I1124 03:11:05.405411  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt: {Name:mk5c1d3bc48ab0dc1254aae88b7ec32711e77a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.405578  636397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key ...
	I1124 03:11:05.405599  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key: {Name:mk42df1886b091d28840c422e5e20c0f8c4e5569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.405873  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:05.405948  636397 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:05.405959  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:05.406001  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:05.406031  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:05.406059  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:05.406113  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:05.406989  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:05.434254  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:05.460107  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:05.485830  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:05.511902  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 03:11:05.535282  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:11:05.558610  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:05.579558  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:11:05.598340  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:05.620622  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:05.644303  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:05.667291  636397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:05.681732  636397 ssh_runner.go:195] Run: openssl version
	I1124 03:11:05.689816  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:05.701038  636397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:05.705646  636397 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:05.705699  636397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:05.763638  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:11:05.776210  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:05.789125  636397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:05.794258  636397 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:05.794315  636397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:05.853631  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:11:05.886140  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:05.898078  636397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:05.902187  636397 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:05.902252  636397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:06.009788  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
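Note: the three "ln -fs ... /etc/ssl/certs/<hash>.0" steps above follow OpenSSL's subject-hash lookup convention: "openssl x509 -hash -noout" prints the hash of the certificate's subject name, and OpenSSL locates trust anchors in /etc/ssl/certs via "<hash>.0" file names (here 3ec20f2e.0, b5213941.0 and 51391683.0). A minimal sketch of the same step, assuming the paths from this log; the example hash value is the one from the b5213941.0 link above:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"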
	I1124 03:11:06.034772  636397 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:06.040075  636397 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:11:06.040136  636397 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:06.040285  636397 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:06.040340  636397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:06.076603  636397 cri.go:89] found id: ""
	I1124 03:11:06.076664  636397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:06.084730  636397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:11:06.096161  636397 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:11:06.096213  636397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:11:06.104666  636397 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:11:06.104687  636397 kubeadm.go:158] found existing configuration files:
	
	I1124 03:11:06.104736  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1124 03:11:06.112142  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:11:06.112188  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:11:06.119278  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1124 03:11:06.126557  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:11:06.126604  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:11:06.133611  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1124 03:11:06.141319  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:11:06.141384  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:11:06.151450  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1124 03:11:06.162299  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:11:06.162489  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:11:06.173268  636397 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:11:06.365493  636397 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:11:06.445191  636397 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
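Note: both [WARNING ...] lines above are non-fatal. The kubeadm init invocation (see the Start line above) ignores SystemVerification among its preflight checks, and the kubelet-not-enabled hint is moot because minikube starts kubelet itself elsewhere in this log ("sudo systemctl start kubelet"). An assumed manual reproduction of the failing kernel-config check:

	modprobe configs     # FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp
	ls /proc/config.gz   # absent when the kernel does not expose its build configuration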
	I1124 03:11:08.034430  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-438041
	
	I1124 03:11:08.034458  639611 ubuntu.go:182] provisioning hostname "newest-cni-438041"
	I1124 03:11:08.034525  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.053306  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:08.053556  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:08.053570  639611 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-438041 && echo "newest-cni-438041" | sudo tee /etc/hostname
	I1124 03:11:08.201604  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-438041
	
	I1124 03:11:08.201678  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.220581  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:08.220950  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:08.220977  639611 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-438041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-438041/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-438041' | sudo tee -a /etc/hosts; 
				fi
			fi
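Note: the hostname snippet above is idempotent: an existing 127.0.1.1 entry is rewritten in place, otherwise one is appended, so repeated provisioning leaves a single entry. The expected result, derived from the script itself:

	$ grep '^127.0.1.1' /etc/hosts
	127.0.1.1 newest-cni-438041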
	I1124 03:11:08.358818  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:11:08.358853  639611 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:11:08.358877  639611 ubuntu.go:190] setting up certificates
	I1124 03:11:08.358902  639611 provision.go:84] configureAuth start
	I1124 03:11:08.358979  639611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:08.377513  639611 provision.go:143] copyHostCerts
	I1124 03:11:08.377573  639611 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:11:08.377584  639611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:11:08.377654  639611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:11:08.377742  639611 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:11:08.377752  639611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:11:08.377785  639611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:11:08.377851  639611 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:11:08.377860  639611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:11:08.377905  639611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:11:08.378033  639611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.newest-cni-438041 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-438041]
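Note: the san=[...] list above becomes the subjectAltName extension of the generated server.pem. One assumed way to confirm it after provisioning (the command is not part of the test run, and the output shape is illustrative):

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem
	# X509v3 Subject Alternative Name:
	#     IP Address:127.0.0.1, IP Address:192.168.94.2, DNS:localhost, DNS:minikube, DNS:newest-cni-438041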
	I1124 03:11:08.493906  639611 provision.go:177] copyRemoteCerts
	I1124 03:11:08.493995  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:11:08.494042  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.512353  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:08.611703  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:11:08.635092  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:11:08.653622  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:11:08.675705  639611 provision.go:87] duration metric: took 316.785216ms to configureAuth
	I1124 03:11:08.675736  639611 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:11:08.676005  639611 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:08.676156  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.697718  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:08.698047  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:08.698069  639611 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:11:08.991292  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:11:08.991321  639611 machine.go:97] duration metric: took 4.122852164s to provisionDockerMachine
	I1124 03:11:08.991334  639611 client.go:176] duration metric: took 11.662821141s to LocalClient.Create
	I1124 03:11:08.991367  639611 start.go:167] duration metric: took 11.662898329s to libmachine.API.Create "newest-cni-438041"
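Note: /etc/sysconfig/crio.minikube, written a few lines above, is an environment file that the crio systemd unit on the kicbase image is assumed to source via EnvironmentFile=, which is why the write is immediately followed by "systemctl restart crio". A hypothetical way to confirm the wiring:

	systemctl cat crio | grep -i environmentfile   # expected to list /etc/sysconfig/crio.minikube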
	I1124 03:11:08.991381  639611 start.go:293] postStartSetup for "newest-cni-438041" (driver="docker")
	I1124 03:11:08.991395  639611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:11:08.991454  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:11:08.991515  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.009958  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.110159  639611 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:11:09.113555  639611 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:11:09.113584  639611 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:11:09.113597  639611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:11:09.113650  639611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:11:09.113762  639611 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:11:09.113944  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:11:09.121410  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:09.140617  639611 start.go:296] duration metric: took 149.222262ms for postStartSetup
	I1124 03:11:09.141052  639611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:09.158606  639611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json ...
	I1124 03:11:09.158846  639611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:11:09.158906  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.176052  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.271931  639611 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
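Note: the two df probes check disk pressure on /var: field $5 of "df -h" is the used percentage, and field $4 of "df -BG" is the remaining space in gigabytes. Illustrative output (values assumed, not from this run):

	$ df -h /var | awk 'NR==2{print $5}'
	23%
	$ df -BG /var | awk 'NR==2{print $4}'
	250G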
	I1124 03:11:09.276348  639611 start.go:128] duration metric: took 11.950609978s to createHost
	I1124 03:11:09.276376  639611 start.go:83] releasing machines lock for "newest-cni-438041", held for 11.950766604s
	I1124 03:11:09.276440  639611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:09.294908  639611 ssh_runner.go:195] Run: cat /version.json
	I1124 03:11:09.294952  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.294957  639611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:11:09.295031  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.313079  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.314881  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.408772  639611 ssh_runner.go:195] Run: systemctl --version
	I1124 03:11:09.469031  639611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:11:09.504409  639611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:11:09.508820  639611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:11:09.508877  639611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:11:09.533917  639611 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:11:09.533945  639611 start.go:496] detecting cgroup driver to use...
	I1124 03:11:09.533978  639611 detect.go:190] detected "systemd" cgroup driver on host os
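Note: "systemd" is the cgroup driver minikube detects on this host, and it matches the cgroup_manager = "systemd" rewrite applied to the CRI-O config below. An assumed manual check for a cgroup-v2 (unified) hierarchy:

	stat -fc %T /sys/fs/cgroup   # prints cgroup2fs on a cgroup-v2 host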
	I1124 03:11:09.534024  639611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:11:09.550223  639611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:11:09.561378  639611 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:11:09.561431  639611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:11:09.576700  639611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:11:09.592718  639611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:11:09.686327  639611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:11:09.778323  639611 docker.go:234] disabling docker service ...
	I1124 03:11:09.778388  639611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:11:09.797725  639611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:11:09.809981  639611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:11:09.897574  639611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:11:09.981763  639611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:11:09.993604  639611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:11:10.008039  639611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:11:10.008088  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.017807  639611 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:11:10.017915  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.026036  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.034318  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.042375  639611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:11:10.050115  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.058198  639611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.071036  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.079079  639611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:11:10.085901  639611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:11:10.092631  639611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:10.187290  639611 ssh_runner.go:195] Run: sudo systemctl restart crio
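Note: the net effect of the sed pipeline above on /etc/crio/crio.conf.d/02-crio.conf, reconstructed from the commands themselves (keys not touched by the edits are omitted):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]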
	I1124 03:11:10.321446  639611 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:11:10.321516  639611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:11:10.325320  639611 start.go:564] Will wait 60s for crictl version
	I1124 03:11:10.325377  639611 ssh_runner.go:195] Run: which crictl
	I1124 03:11:10.328940  639611 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:11:10.355782  639611 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:11:10.355854  639611 ssh_runner.go:195] Run: crio --version
	I1124 03:11:10.386668  639611 ssh_runner.go:195] Run: crio --version
	I1124 03:11:10.419997  639611 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:11:10.421239  639611 cli_runner.go:164] Run: docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:11:10.440078  639611 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:11:10.443982  639611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:10.455537  639611 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 03:11:10.456654  639611 kubeadm.go:884] updating cluster {Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:11:10.456815  639611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:11:10.456863  639611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:10.490472  639611 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:10.490492  639611 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:11:10.490540  639611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:10.519699  639611 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:10.519720  639611 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:11:10.519729  639611 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:11:10.519828  639611 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-438041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
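Note: the doubled ExecStart= in the unit above is the standard systemd override idiom: the empty assignment clears the packaged ExecStart before redefining it. The override itself is the 367-byte 10-kubeadm.conf scp'd to /etc/systemd/system/kubelet.service.d/ later in this log; an assumed way to view the merged unit:

	systemctl cat kubelet   # prints kubelet.service plus the 10-kubeadm.conf drop-in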
	I1124 03:11:10.519912  639611 ssh_runner.go:195] Run: crio config
	I1124 03:11:10.565191  639611 cni.go:84] Creating CNI manager for ""
	I1124 03:11:10.565215  639611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:10.565239  639611 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 03:11:10.565270  639611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-438041 NodeName:newest-cni-438041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:10.565418  639611 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-438041"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
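Note: the YAML above is the four-document kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is staged as kubeadm.yaml.new below and eventually passed to "kubeadm init --config". An assumed offline sanity check, using the "kubeadm config validate" subcommand available in recent kubeadm releases:

	/var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml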
	
	I1124 03:11:10.565482  639611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:10.573438  639611 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:11:10.573499  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:10.581224  639611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:11:10.593276  639611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:10.607346  639611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1124 03:11:10.619134  639611 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:10.622475  639611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:10.631680  639611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:10.724670  639611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:10.750283  639611 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041 for IP: 192.168.94.2
	I1124 03:11:10.750306  639611 certs.go:195] generating shared ca certs ...
	I1124 03:11:10.750339  639611 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:10.750511  639611 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:10.750555  639611 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:10.750565  639611 certs.go:257] generating profile certs ...
	I1124 03:11:10.750620  639611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key
	I1124 03:11:10.750633  639611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.crt with IP's: []
	I1124 03:11:10.920017  639611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.crt ...
	I1124 03:11:10.920047  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.crt: {Name:mkfd139af0a71cd4698b8ff5b3e638153eeb0dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:10.920228  639611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key ...
	I1124 03:11:10.920243  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key: {Name:mke75272685634ebc2912579601c6ca7cb4478b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:10.920357  639611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183
	I1124 03:11:10.920374  639611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 03:11:11.156793  639611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183 ...
	I1124 03:11:11.156820  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183: {Name:mke55e2e412acbf5b903a8d8b4a7d2880f9fbe7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:11.157004  639611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183 ...
	I1124 03:11:11.157022  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183: {Name:mkad44470d73de35f2d3ae6d5e6d61417cfe11c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
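Note: the IP list for this apiserver cert includes 10.96.0.1, the first address of the ServiceCIDR 10.96.0.0/12 from the cluster config, i.e. the in-cluster IP of the kubernetes.default service, alongside the node IP 192.168.94.2. An assumed post-start check:

	kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'   # expected: 10.96.0.1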
	I1124 03:11:11.157103  639611 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt
	I1124 03:11:11.157202  639611 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key
	I1124 03:11:11.157264  639611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key
	I1124 03:11:11.157285  639611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt with IP's: []
	I1124 03:11:11.183331  639611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt ...
	I1124 03:11:11.183357  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt: {Name:mkaf061d70fce7922fd95db6d82ac8186d66239f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:11.183478  639611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key ...
	I1124 03:11:11.183490  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key: {Name:mk44940b01cb7f629207bffeb036b8a7e5d40814 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:11.183656  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:11.183693  639611 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:11.183702  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:11.183724  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:11.183746  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:11.183768  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:11.183810  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:11.184490  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:11.202414  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:11.218915  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:11.235233  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:11.251127  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:11:11.267814  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:11:11.284563  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:11.300790  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:11:11.316788  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:11.334413  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:11.350424  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:11.366533  639611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:11.378365  639611 ssh_runner.go:195] Run: openssl version
	I1124 03:11:11.384126  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:11.391937  639611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:11.395429  639611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:11.395475  639611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:11.428268  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:11:11.435958  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:11.443551  639611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:11.446861  639611 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:11.446917  639611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:11.480561  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:11.488521  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:11.496317  639611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:11.499903  639611 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:11.500486  639611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:11.534970  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:11:11.542760  639611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:11.546025  639611 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:11:11.546084  639611 kubeadm.go:401] StartCluster: {Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:11.546189  639611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:11.546235  639611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:11.573079  639611 cri.go:89] found id: ""
	I1124 03:11:11.573143  639611 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:11.580989  639611 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:11:11.588193  639611 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:11:11.588243  639611 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:11:11.595578  639611 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:11:11.595596  639611 kubeadm.go:158] found existing configuration files:
	
	I1124 03:11:11.595632  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:11:11.602806  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:11:11.602846  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:11:11.609710  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:11:11.617281  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:11:11.617327  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:11:11.624606  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:11:11.631999  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:11:11.632041  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:11:11.640350  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:11:11.648359  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:11:11.648402  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:11:11.656826  639611 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:11:11.705613  639611 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:11:11.705684  639611 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:11:11.726192  639611 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:11:11.726285  639611 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:11:11.726340  639611 kubeadm.go:319] OS: Linux
	I1124 03:11:11.726397  639611 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:11:11.726461  639611 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:11:11.726524  639611 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:11:11.726587  639611 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:11:11.726686  639611 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:11:11.726790  639611 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:11:11.726861  639611 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:11:11.726943  639611 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:11:11.786505  639611 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:11:11.786613  639611 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:11:11.786747  639611 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:11:11.794629  639611 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1124 03:11:08.757098  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	W1124 03:11:10.757264  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	I1124 03:11:11.798699  639611 out.go:252]   - Generating certificates and keys ...
	I1124 03:11:11.798797  639611 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:11:11.798912  639611 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:11:11.963263  639611 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:11:12.107595  639611 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:11:07.983375  631782 out.go:252]   - Generating certificates and keys ...
	I1124 03:11:07.983499  631782 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:11:07.983606  631782 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:11:09.010428  631782 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:11:09.257194  631782 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:11:09.494535  631782 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:11:09.716956  631782 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:11:09.775865  631782 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:11:09.776099  631782 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-603010] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:11:10.030969  631782 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:11:10.031162  631782 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-603010] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:11:10.290289  631782 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:11:10.445776  631782 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:11:10.719700  631782 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:11:10.719788  631782 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:11:10.954056  631782 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:11:11.224490  631782 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:11:11.470938  631782 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:11:11.927378  631782 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:11:12.303932  631782 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:11:12.304513  631782 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:11:12.307975  631782 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:11:12.309284  631782 out.go:252]   - Booting up control plane ...
	I1124 03:11:12.309381  631782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:11:12.309465  631782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:11:12.310009  631782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:11:12.339837  631782 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:11:12.340003  631782 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:11:12.347388  631782 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:11:12.347620  631782 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:11:12.347698  631782 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:11:12.466844  631782 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:11:12.466970  631782 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
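The kubelet-check phase simply polls the kubelet's local healthz endpoint (shown in the line above) until it answers 200. The same probe can be run by hand on the node; the curl flags are illustrative:

	curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy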
	I1124 03:11:12.233009  639611 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:11:12.451335  639611 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:11:12.593355  639611 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:11:12.593574  639611 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-438041] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:11:13.275810  639611 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:11:13.276017  639611 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-438041] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:11:14.145354  639611 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:11:14.614138  639611 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:11:14.941086  639611 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:11:14.941227  639611 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:11:15.058919  639611 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:11:15.267378  639611 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:11:15.939232  639611 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:11:16.257592  639611 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:11:16.635822  639611 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:11:16.636485  639611 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:11:16.640110  639611 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1124 03:11:13.256972  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	W1124 03:11:15.259252  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	I1124 03:11:12.968700  631782 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.726277ms
	I1124 03:11:12.972359  631782 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:11:12.972498  631782 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 03:11:12.972634  631782 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:11:12.972778  631782 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:11:15.168823  631782 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.194903045s
	I1124 03:11:15.395212  631782 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.422782586s
	I1124 03:11:16.974533  631782 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002117874s
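control-plane-check probes each component on its local secure port: controller-manager and scheduler on loopback, the apiserver on the node IP. The bootstrap components serve self-signed certificates, so a manual probe needs -k; in default clusters the system:public-info-viewer binding allows anonymous GETs on /healthz and /livez, so no client certificate should be required (a sketch; the endpoints are copied from the log lines above):

	curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez     # kube-scheduler
	curl -k https://192.168.85.2:8443/livez   # kube-apiserver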
	I1124 03:11:16.990327  631782 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:11:17.001157  631782 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:11:17.009558  631782 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:11:17.009832  631782 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-603010 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:11:17.017079  631782 kubeadm.go:319] [bootstrap-token] Using token: qixyjy.v1lkfw8d9c2mcnrf
	I1124 03:11:16.641561  639611 out.go:252]   - Booting up control plane ...
	I1124 03:11:16.641675  639611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:11:16.641789  639611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:11:16.642679  639611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:11:16.660968  639611 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:11:16.661101  639611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:11:16.668686  639611 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:11:16.669004  639611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:11:16.669064  639611 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:11:16.793748  639611 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:11:16.793925  639611 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:11:17.712301  636397 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:11:17.712380  636397 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:11:17.712515  636397 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:11:17.712609  636397 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:11:17.712667  636397 kubeadm.go:319] OS: Linux
	I1124 03:11:17.712717  636397 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:11:17.712772  636397 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:11:17.712846  636397 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:11:17.712998  636397 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:11:17.713081  636397 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:11:17.713158  636397 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:11:17.713228  636397 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:11:17.713298  636397 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:11:17.713410  636397 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:11:17.713559  636397 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:11:17.713706  636397 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:11:17.713767  636397 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:11:17.715195  636397 out.go:252]   - Generating certificates and keys ...
	I1124 03:11:17.715298  636397 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:11:17.715442  636397 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:11:17.715523  636397 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:11:17.715597  636397 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:11:17.715657  636397 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:11:17.715733  636397 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:11:17.715822  636397 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:11:17.716053  636397 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993813 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:11:17.716134  636397 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:11:17.716334  636397 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993813 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:11:17.716443  636397 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:11:17.716537  636397 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:11:17.716600  636397 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:11:17.716682  636397 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:11:17.716772  636397 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:11:17.716823  636397 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:11:17.716938  636397 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:11:17.717053  636397 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:11:17.717141  636397 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:11:17.717221  636397 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:11:17.717295  636397 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:11:17.718959  636397 out.go:252]   - Booting up control plane ...
	I1124 03:11:17.719049  636397 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:11:17.719135  636397 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:11:17.719219  636397 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:11:17.719341  636397 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:11:17.719462  636397 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:11:17.719560  636397 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:11:17.719632  636397 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:11:17.719681  636397 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:11:17.719830  636397 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:11:17.719976  636397 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:11:17.720049  636397 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501467711s
	I1124 03:11:17.720160  636397 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:11:17.720268  636397 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1124 03:11:17.720406  636397 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:11:17.720513  636397 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:11:17.720614  636397 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.599087563s
	I1124 03:11:17.720742  636397 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.501028525s
	I1124 03:11:17.720844  636397 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00179766s
	I1124 03:11:17.721018  636397 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:11:17.721192  636397 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:11:17.721298  636397 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:11:17.721558  636397 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-993813 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:11:17.721622  636397 kubeadm.go:319] [bootstrap-token] Using token: q5wdgj.p9bwnkl5amhf01kb
	I1124 03:11:17.722776  636397 out.go:252]   - Configuring RBAC rules ...
	I1124 03:11:17.722949  636397 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:11:17.723089  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:11:17.723273  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:11:17.723470  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:11:17.723636  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:11:17.723759  636397 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:11:17.723924  636397 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:11:17.723997  636397 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:11:17.724057  636397 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:11:17.724062  636397 kubeadm.go:319] 
	I1124 03:11:17.724140  636397 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:11:17.724145  636397 kubeadm.go:319] 
	I1124 03:11:17.724249  636397 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:11:17.724254  636397 kubeadm.go:319] 
	I1124 03:11:17.724288  636397 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:11:17.724365  636397 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:11:17.724429  636397 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:11:17.724434  636397 kubeadm.go:319] 
	I1124 03:11:17.724504  636397 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:11:17.724509  636397 kubeadm.go:319] 
	I1124 03:11:17.724570  636397 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:11:17.724576  636397 kubeadm.go:319] 
	I1124 03:11:17.724642  636397 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:11:17.724751  636397 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:11:17.724845  636397 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:11:17.724850  636397 kubeadm.go:319] 
	I1124 03:11:17.724962  636397 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:11:17.725053  636397 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:11:17.725058  636397 kubeadm.go:319] 
	I1124 03:11:17.725156  636397 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token q5wdgj.p9bwnkl5amhf01kb \
	I1124 03:11:17.725281  636397 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:11:17.725306  636397 kubeadm.go:319] 	--control-plane 
	I1124 03:11:17.725311  636397 kubeadm.go:319] 
	I1124 03:11:17.725412  636397 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:11:17.725417  636397 kubeadm.go:319] 
	I1124 03:11:17.725515  636397 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token q5wdgj.p9bwnkl5amhf01kb \
	I1124 03:11:17.725654  636397 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
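The --discovery-token-ca-cert-hash in the join command pins the cluster CA for joining nodes; it is the SHA-256 of the CA's public key in DER form. Note that all three profiles in this run print the same hash (sha256:aff636c2...), which is consistent with them sharing a CA under the run's .minikube directory. The standard recipe for recomputing the hash on a control-plane node, per the kubeadm documentation (the path assumes kubeadm's default PKI directory):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'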
	I1124 03:11:17.725664  636397 cni.go:84] Creating CNI manager for ""
	I1124 03:11:17.725672  636397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:17.727357  636397 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:11:17.018572  631782 out.go:252]   - Configuring RBAC rules ...
	I1124 03:11:17.018732  631782 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:11:17.021245  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:11:17.025919  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:11:17.028242  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:11:17.030590  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:11:17.032723  631782 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:11:17.380197  631782 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:11:17.802727  631782 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:11:18.381075  631782 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:11:18.382320  631782 kubeadm.go:319] 
	I1124 03:11:18.382408  631782 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:11:18.382416  631782 kubeadm.go:319] 
	I1124 03:11:18.382508  631782 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:11:18.382522  631782 kubeadm.go:319] 
	I1124 03:11:18.382554  631782 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:11:18.382630  631782 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:11:18.382704  631782 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:11:18.382712  631782 kubeadm.go:319] 
	I1124 03:11:18.382781  631782 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:11:18.382791  631782 kubeadm.go:319] 
	I1124 03:11:18.382850  631782 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:11:18.382859  631782 kubeadm.go:319] 
	I1124 03:11:18.382948  631782 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:11:18.383059  631782 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:11:18.383153  631782 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:11:18.383164  631782 kubeadm.go:319] 
	I1124 03:11:18.383265  631782 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:11:18.383360  631782 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:11:18.383370  631782 kubeadm.go:319] 
	I1124 03:11:18.383510  631782 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qixyjy.v1lkfw8d9c2mcnrf \
	I1124 03:11:18.383708  631782 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:11:18.383747  631782 kubeadm.go:319] 	--control-plane 
	I1124 03:11:18.383767  631782 kubeadm.go:319] 
	I1124 03:11:18.383880  631782 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:11:18.383909  631782 kubeadm.go:319] 
	I1124 03:11:18.384037  631782 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qixyjy.v1lkfw8d9c2mcnrf \
	I1124 03:11:18.384180  631782 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
	I1124 03:11:18.387182  631782 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:11:18.387348  631782 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:11:18.387386  631782 cni.go:84] Creating CNI manager for ""
	I1124 03:11:18.387399  631782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:18.389706  631782 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:11:17.729080  636397 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:11:17.735280  636397 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:11:17.735299  636397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:11:17.750224  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:11:17.964488  636397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:11:17.964571  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:17.964583  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993813 minikube.k8s.io/updated_at=2025_11_24T03_11_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=default-k8s-diff-port-993813 minikube.k8s.io/primary=true
	I1124 03:11:17.977541  636397 ops.go:34] apiserver oom_adj: -16
	I1124 03:11:18.089531  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:18.589931  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
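The elevateKubeSystemPrivileges step grants cluster-admin to the kube-system default ServiceAccount (the minikube-rbac clusterrolebinding created above) and then polls `kubectl get sa default` until the apiserver has provisioned the account, which is why the same command repeats every ~500ms here and below. A hand-rolled equivalent of the wait loop (illustrative only; paths copied from the log):

	until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5   # default ServiceAccount not created yet
	done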
	I1124 03:11:17.757544  623347 node_ready.go:49] node "old-k8s-version-579951" is "Ready"
	I1124 03:11:17.757568  623347 node_ready.go:38] duration metric: took 13.503706583s for node "old-k8s-version-579951" to be "Ready" ...
	I1124 03:11:17.757591  623347 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:17.757632  623347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:11:17.769351  623347 api_server.go:72] duration metric: took 13.944624755s to wait for apiserver process to appear ...
	I1124 03:11:17.769381  623347 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:17.769404  623347 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 03:11:17.773486  623347 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 03:11:17.774606  623347 api_server.go:141] control plane version: v1.28.0
	I1124 03:11:17.774639  623347 api_server.go:131] duration metric: took 5.249615ms to wait for apiserver health ...
	I1124 03:11:17.774650  623347 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:17.778732  623347 system_pods.go:59] 8 kube-system pods found
	I1124 03:11:17.778769  623347 system_pods.go:61] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:17.778779  623347 system_pods.go:61] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:17.778787  623347 system_pods.go:61] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:17.778792  623347 system_pods.go:61] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:17.778797  623347 system_pods.go:61] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:17.778806  623347 system_pods.go:61] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:17.778810  623347 system_pods.go:61] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:17.778817  623347 system_pods.go:61] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:17.778824  623347 system_pods.go:74] duration metric: took 4.167214ms to wait for pod list to return data ...
	I1124 03:11:17.778835  623347 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:17.781411  623347 default_sa.go:45] found service account: "default"
	I1124 03:11:17.781435  623347 default_sa.go:55] duration metric: took 2.594162ms for default service account to be created ...
	I1124 03:11:17.781446  623347 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:11:17.784981  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:17.785018  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:17.785031  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:17.785044  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:17.785050  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:17.785061  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:17.785066  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:17.785076  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:17.785090  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:17.785127  623347 retry.go:31] will retry after 271.484184ms: missing components: kube-dns
	I1124 03:11:18.065194  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:18.065237  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:18.065248  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:18.065257  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:18.065263  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:18.065269  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:18.065274  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:18.065279  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:18.065287  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:18.065306  623347 retry.go:31] will retry after 388.018904ms: missing components: kube-dns
	I1124 03:11:18.457864  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:18.457936  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:18.457946  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:18.457961  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:18.457972  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:18.457978  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:18.457984  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:18.457991  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:18.457999  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:18.458022  623347 retry.go:31] will retry after 449.601826ms: missing components: kube-dns
	I1124 03:11:18.911831  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:18.911859  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Running
	I1124 03:11:18.911865  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:18.911869  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:18.911873  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:18.911877  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:18.911880  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:18.911916  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:18.911921  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Running
	I1124 03:11:18.911931  623347 system_pods.go:126] duration metric: took 1.130477915s to wait for k8s-apps to be running ...
	I1124 03:11:18.911944  623347 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:11:18.911996  623347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:18.925774  623347 system_svc.go:56] duration metric: took 13.819357ms WaitForService to wait for kubelet
	I1124 03:11:18.925804  623347 kubeadm.go:587] duration metric: took 15.101081639s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:11:18.925827  623347 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:18.928599  623347 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:18.928633  623347 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:18.928652  623347 node_conditions.go:105] duration metric: took 2.818338ms to run NodePressure ...
	I1124 03:11:18.928667  623347 start.go:242] waiting for startup goroutines ...
	I1124 03:11:18.928681  623347 start.go:247] waiting for cluster config update ...
	I1124 03:11:18.928701  623347 start.go:256] writing updated cluster config ...
	I1124 03:11:18.929049  623347 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:18.933285  623347 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:18.937686  623347 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.946299  623347 pod_ready.go:94] pod "coredns-5dd5756b68-5nwx9" is "Ready"
	I1124 03:11:18.946320  623347 pod_ready.go:86] duration metric: took 8.611977ms for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.950801  623347 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.960988  623347 pod_ready.go:94] pod "etcd-old-k8s-version-579951" is "Ready"
	I1124 03:11:18.961015  623347 pod_ready.go:86] duration metric: took 10.19455ms for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.965881  623347 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.974882  623347 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-579951" is "Ready"
	I1124 03:11:18.974933  623347 pod_ready.go:86] duration metric: took 9.016779ms for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.977770  623347 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:19.341020  623347 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-579951" is "Ready"
	I1124 03:11:19.341052  623347 pod_ready.go:86] duration metric: took 363.250058ms for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:19.538869  623347 pod_ready.go:83] waiting for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:19.937877  623347 pod_ready.go:94] pod "kube-proxy-r82jh" is "Ready"
	I1124 03:11:19.937925  623347 pod_ready.go:86] duration metric: took 399.001292ms for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:20.140275  623347 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:20.537761  623347 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-579951" is "Ready"
	I1124 03:11:20.537795  623347 pod_ready.go:86] duration metric: took 397.491187ms for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:20.537812  623347 pod_ready.go:40] duration metric: took 1.604492738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:20.582109  623347 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 03:11:20.583699  623347 out.go:203] 
	W1124 03:11:20.584752  623347 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:11:20.585796  623347 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:11:20.587217  623347 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-579951" cluster and "default" namespace by default
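The skew warning above is expected: the Kubernetes version-skew policy supports kubectl within one minor version of the apiserver, and 1.34 against 1.28 is six minors apart. The workaround the log suggests, spelled out against this profile (profile name taken from the log), fetches a matching kubectl instead of the host's:

	out/minikube-linux-amd64 -p old-k8s-version-579951 kubectl -- get pods -A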
	I1124 03:11:17.795245  639611 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001564938s
	I1124 03:11:17.799260  639611 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:11:17.799423  639611 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 03:11:17.799562  639611 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:11:17.799651  639611 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:11:20.070827  639611 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.271449475s
	I1124 03:11:20.290602  639611 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.491348646s
	I1124 03:11:21.801475  639611 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002149825s
	I1124 03:11:21.812595  639611 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:11:21.822553  639611 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:11:21.831169  639611 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:11:21.831446  639611 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-438041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:11:21.841628  639611 kubeadm.go:319] [bootstrap-token] Using token: yx8fea.c13myzzt6w383nef
	I1124 03:11:21.842995  639611 out.go:252]   - Configuring RBAC rules ...
	I1124 03:11:21.843145  639611 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:11:21.846076  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:11:21.851007  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:11:21.853367  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:11:21.856222  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:11:21.859271  639611 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:11:19.090574  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:19.589602  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.090576  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.590533  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.089866  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.589593  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.089582  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.590222  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.673854  636397 kubeadm.go:1114] duration metric: took 4.709348594s to wait for elevateKubeSystemPrivileges
	I1124 03:11:22.673908  636397 kubeadm.go:403] duration metric: took 16.63377865s to StartCluster
	I1124 03:11:22.673934  636397 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:22.674008  636397 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:22.675076  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:22.675302  636397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:22.675326  636397 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:22.675390  636397 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993813"
	I1124 03:11:22.675304  636397 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:22.675418  636397 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993813"
	I1124 03:11:22.675431  636397 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993813"
	I1124 03:11:22.675411  636397 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993813"
	I1124 03:11:22.675530  636397 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:11:22.675536  636397 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:22.675814  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:11:22.676034  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:11:22.676852  636397 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:22.678754  636397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:22.703150  636397 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993813"
	I1124 03:11:22.703198  636397 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:11:22.703676  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:11:22.704736  636397 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:18.390820  631782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:11:18.395615  631782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:11:18.395633  631782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:11:18.409234  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:11:18.710608  631782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:11:18.710754  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:18.710853  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-603010 minikube.k8s.io/updated_at=2025_11_24T03_11_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=no-preload-603010 minikube.k8s.io/primary=true
	I1124 03:11:18.818373  631782 ops.go:34] apiserver oom_adj: -16
	I1124 03:11:18.818465  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:19.318531  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:19.819135  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.319402  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.819441  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.319189  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.818604  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.319077  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.706096  636397 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:22.706117  636397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:22.706176  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:22.737283  636397 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:22.737304  636397 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:22.737370  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:22.740863  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:22.761473  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:22.778645  636397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:22.830555  636397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:22.862561  636397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:22.876089  636397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:22.963053  636397 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
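The sed-plus-`kubectl replace` pipeline above patches the CoreDNS Corefile in place, inserting a hosts block ahead of the forward plugin so that host.minikube.internal resolves to the node gateway (192.168.76.1) from inside pods. After the replace, the ConfigMap should contain a stanza like the following, reconstructed from the sed expression in the log:

	kubectl -n kube-system get configmap coredns -o yaml
	# ... inside the Corefile, before the forward plugin:
	#        hosts {
	#           192.168.76.1 host.minikube.internal
	#           fallthrough
	#        }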
	I1124 03:11:22.964307  636397 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:11:23.185636  636397 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:22.209953  639611 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:11:22.623609  639611 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:11:23.207075  639611 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:11:23.208086  639611 kubeadm.go:319] 
	I1124 03:11:23.208184  639611 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:11:23.208202  639611 kubeadm.go:319] 
	I1124 03:11:23.208296  639611 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:11:23.208304  639611 kubeadm.go:319] 
	I1124 03:11:23.208344  639611 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:11:23.208443  639611 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:11:23.208509  639611 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:11:23.208519  639611 kubeadm.go:319] 
	I1124 03:11:23.208591  639611 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:11:23.208601  639611 kubeadm.go:319] 
	I1124 03:11:23.208661  639611 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:11:23.208671  639611 kubeadm.go:319] 
	I1124 03:11:23.208771  639611 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:11:23.208934  639611 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:11:23.209014  639611 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:11:23.209021  639611 kubeadm.go:319] 
	I1124 03:11:23.209090  639611 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:11:23.209153  639611 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:11:23.209159  639611 kubeadm.go:319] 
	I1124 03:11:23.209225  639611 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yx8fea.c13myzzt6w383nef \
	I1124 03:11:23.209329  639611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:11:23.209368  639611 kubeadm.go:319] 	--control-plane 
	I1124 03:11:23.209382  639611 kubeadm.go:319] 
	I1124 03:11:23.209513  639611 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:11:23.209523  639611 kubeadm.go:319] 
	I1124 03:11:23.209667  639611 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yx8fea.c13myzzt6w383nef \
	I1124 03:11:23.209795  639611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
	I1124 03:11:23.212372  639611 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:11:23.212472  639611 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
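The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key; a joining node uses it to pin the control-plane identity it fetches via the bootstrap token. It can be recomputed on the control plane with the recipe from the kubeadm docs (stock kubeadm path shown; minikube keeps its CA at /var/lib/minikube/certs/ca.crt):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | sha256sum | cut -d' ' -f1
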
	I1124 03:11:23.212489  639611 cni.go:84] Creating CNI manager for ""
	I1124 03:11:23.212498  639611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:23.213669  639611 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:11:22.819290  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.318726  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.413238  631782 kubeadm.go:1114] duration metric: took 4.702498844s to wait for elevateKubeSystemPrivileges
	I1124 03:11:23.413274  631782 kubeadm.go:403] duration metric: took 15.686211393s to StartCluster
	I1124 03:11:23.413298  631782 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:23.413374  631782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:23.415097  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:23.415455  631782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:23.415991  631782 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:23.416200  631782 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:23.416393  631782 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:23.416478  631782 addons.go:70] Setting storage-provisioner=true in profile "no-preload-603010"
	I1124 03:11:23.416515  631782 addons.go:239] Setting addon storage-provisioner=true in "no-preload-603010"
	I1124 03:11:23.416545  631782 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:11:23.416771  631782 addons.go:70] Setting default-storageclass=true in profile "no-preload-603010"
	I1124 03:11:23.416794  631782 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603010"
	I1124 03:11:23.417522  631782 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:11:23.418922  631782 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:11:23.420690  631782 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:23.422440  631782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:23.453170  631782 addons.go:239] Setting addon default-storageclass=true in "no-preload-603010"
	I1124 03:11:23.453315  631782 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:11:23.454249  631782 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:11:23.456721  631782 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:23.187200  636397 addons.go:530] duration metric: took 511.871879ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:23.468811  636397 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993813" context rescaled to 1 replicas
	I1124 03:11:23.457832  631782 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:23.457852  631782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:23.457945  631782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:11:23.485040  631782 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:23.485073  631782 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:23.485135  631782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:11:23.488649  631782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:11:23.522776  631782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
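The two docker container inspect calls above use a Go template to recover the host port that Docker published for the container's SSH port (22/tcp); minikube then dials its ssh client against 127.0.0.1 on that port (33463 here, matching the sshutil lines). The same lookup works standalone:

	# prints the published host port for 22/tcp, e.g. 33463
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-603010
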
	I1124 03:11:23.578154  631782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:23.637057  631782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:23.642323  631782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:23.675165  631782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:23.795763  631782 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:23.982706  631782 node_ready.go:35] waiting up to 6m0s for node "no-preload-603010" to be "Ready" ...
	I1124 03:11:23.988365  631782 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:23.214606  639611 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:11:23.218969  639611 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:11:23.219002  639611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:11:23.233030  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:11:23.530587  639611 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:11:23.530753  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.530907  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-438041 minikube.k8s.io/updated_at=2025_11_24T03_11_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=newest-cni-438041 minikube.k8s.io/primary=true
	I1124 03:11:23.553306  639611 ops.go:34] apiserver oom_adj: -16
	I1124 03:11:23.638819  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:24.139560  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:24.639641  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:25.139273  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:25.638941  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:26.139461  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:26.638988  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.989407  631782 addons.go:530] duration metric: took 573.023057ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:24.300916  631782 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-603010" context rescaled to 1 replicas
	W1124 03:11:25.985432  631782 node_ready.go:57] node "no-preload-603010" has "Ready":"False" status (will retry)
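node_ready polls the node object and retries while the Ready condition is False, which is what the warning above records. Outside the test harness, the same wait is a one-liner (assuming the kubeconfig points at this cluster):

	kubectl wait --for=condition=Ready node/no-preload-603010 --timeout=6m
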
	I1124 03:11:27.139734  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:27.639015  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:28.139551  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:28.207738  639611 kubeadm.go:1114] duration metric: took 4.677029552s to wait for elevateKubeSystemPrivileges
	I1124 03:11:28.207780  639611 kubeadm.go:403] duration metric: took 16.661698302s to StartCluster
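The elevateKubeSystemPrivileges step timed above is the repeated `get sa default` loop plus the minikube-rbac binding created earlier in this stream: minikube waits until the default ServiceAccount exists, then grants cluster-admin to kube-system:default. Stripped of the kubeconfig plumbing, the binding is:

	kubectl create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin \
	  --serviceaccount=kube-system:default
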
	I1124 03:11:28.207804  639611 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:28.207878  639611 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:28.209479  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:28.209719  639611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:28.209737  639611 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:28.209814  639611 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:28.209929  639611 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-438041"
	I1124 03:11:28.209946  639611 addons.go:70] Setting default-storageclass=true in profile "newest-cni-438041"
	I1124 03:11:28.209971  639611 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-438041"
	I1124 03:11:28.209980  639611 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-438041"
	I1124 03:11:28.210010  639611 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:28.210056  639611 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:28.210387  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:28.210537  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:28.211106  639611 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:28.212323  639611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:28.233230  639611 addons.go:239] Setting addon default-storageclass=true in "newest-cni-438041"
	I1124 03:11:28.233278  639611 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:28.233850  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:28.234771  639611 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:28.235819  639611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:28.235861  639611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:28.235962  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:28.261133  639611 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:28.261156  639611 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:28.261334  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:28.267999  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:28.289398  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:28.299784  639611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:28.359817  639611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:28.384919  639611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:28.404504  639611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:28.491961  639611 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:28.493110  639611 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:28.493157  639611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1124 03:11:28.510848  639611 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "newest-cni-438041" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1124 03:11:28.510875  639611 start.go:161] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
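The rescale failure above is an ordinary optimistic-concurrency conflict: the controller manager modified the coredns Deployment between minikube's read and write, and minikube classifies the 409 as non-retryable rather than re-reading. Done by hand, the scale-down goes through the scale subresource, which sidesteps most such conflicts:

	kubectl -n kube-system scale deployment coredns --replicas=1
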
	I1124 03:11:28.701114  639611 api_server.go:72] duration metric: took 491.340672ms to wait for apiserver process to appear ...
	I1124 03:11:28.701143  639611 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:28.701166  639611 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:28.705994  639611 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:11:28.706754  639611 api_server.go:141] control plane version: v1.34.1
	I1124 03:11:28.706781  639611 api_server.go:131] duration metric: took 5.630796ms to wait for apiserver health ...
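The health wait above polls the apiserver's /healthz endpoint directly over HTTPS; the endpoint answers anonymous requests by default (via the system:public-info-viewer role), and since Kubernetes 1.16 the richer /livez and /readyz endpoints are preferred. A manual probe, assuming the apiserver is reachable from the host:

	curl -k https://192.168.94.2:8443/healthz            # ok
	curl -k 'https://192.168.94.2:8443/readyz?verbose'   # per-check breakdown
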
	I1124 03:11:28.706793  639611 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:28.709054  639611 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:28.709369  639611 system_pods.go:59] 9 kube-system pods found
	I1124 03:11:28.709395  639611 system_pods.go:61] "coredns-66bc5c9577-b5rlp" [ec3ad010-7694-4640-9638-fe6f5c97f56a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:28.709402  639611 system_pods.go:61] "coredns-66bc5c9577-mwvq8" [c8831e7f-34c0-40c7-a728-7f7882ed604a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:28.709411  639611 system_pods.go:61] "etcd-newest-cni-438041" [7acbb753-dfd2-4438-b370-a7e38c4fbc5d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:11:28.709418  639611 system_pods.go:61] "kindnet-xp46p" [19fa7668-24bd-454c-a5df-37534a06d3a5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:11:28.709423  639611 system_pods.go:61] "kube-apiserver-newest-cni-438041" [c7d90375-f6c0-4a1f-8b80-81574119b191] Running
	I1124 03:11:28.709432  639611 system_pods.go:61] "kube-controller-manager-newest-cni-438041" [54b144f6-6f26-4e9b-818b-cbb2d7b4c0a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:11:28.709437  639611 system_pods.go:61] "kube-proxy-n85pg" [86f875e2-7efc-4b60-b031-a1de71ea7502] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:11:28.709447  639611 system_pods.go:61] "kube-scheduler-newest-cni-438041" [75e99a3a-d4a9-4428-a52a-ef5ac4edc76c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:11:28.709457  639611 system_pods.go:61] "storage-provisioner" [9a94c2f7-e288-4528-b22c-f413d79bdf46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:28.709467  639611 system_pods.go:74] duration metric: took 2.667768ms to wait for pod list to return data ...
	I1124 03:11:28.709481  639611 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:28.710153  639611 addons.go:530] duration metric: took 500.34824ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:28.711298  639611 default_sa.go:45] found service account: "default"
	I1124 03:11:28.711317  639611 default_sa.go:55] duration metric: took 1.826862ms for default service account to be created ...
	I1124 03:11:28.711328  639611 kubeadm.go:587] duration metric: took 501.561139ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:11:28.711341  639611 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:28.713171  639611 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:28.713192  639611 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:28.713206  639611 node_conditions.go:105] duration metric: took 1.86027ms to run NodePressure ...
	I1124 03:11:28.713217  639611 start.go:242] waiting for startup goroutines ...
	I1124 03:11:28.713224  639611 start.go:247] waiting for cluster config update ...
	I1124 03:11:28.713233  639611 start.go:256] writing updated cluster config ...
	I1124 03:11:28.713443  639611 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:28.759550  639611 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:11:28.760722  639611 out.go:179] * Done! kubectl is now configured to use "newest-cni-438041" cluster and "default" namespace by default
	W1124 03:11:24.968153  636397 node_ready.go:57] node "default-k8s-diff-port-993813" has "Ready":"False" status (will retry)
	W1124 03:11:27.467212  636397 node_ready.go:57] node "default-k8s-diff-port-993813" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 24 03:11:18 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:18.047811035Z" level=info msg="Starting container: 6ba5363c6ffe27687ebe1c297c847fceabce6fd8c18b9e09fa358f5cb35247e9" id=67fbc292-2348-4859-81de-52d2edda00f1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:11:18 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:18.052161505Z" level=info msg="Started container" PID=2108 containerID=6ba5363c6ffe27687ebe1c297c847fceabce6fd8c18b9e09fa358f5cb35247e9 description=kube-system/coredns-5dd5756b68-5nwx9/coredns id=67fbc292-2348-4859-81de-52d2edda00f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9d5fe7cee26fd67040dc4ea2c6c55ecc815babc81fc20791a033e1e2dcd38d04
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.033625238Z" level=info msg="Running pod sandbox: default/busybox/POD" id=744e79ff-7090-432d-8a11-dedc3e794838 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.033714039Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.038594492Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a1acf56e02477fef98fcdcd9af4128397ee8567f5e39c9520d62aa84ae5128a3 UID:b61ae335-3755-4f88-9305-030d7d7fd2e7 NetNS:/var/run/netns/fc9b290e-c354-48e7-a4a4-030ef67f23e7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128c40}] Aliases:map[]}"
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.03862057Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.053808048Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a1acf56e02477fef98fcdcd9af4128397ee8567f5e39c9520d62aa84ae5128a3 UID:b61ae335-3755-4f88-9305-030d7d7fd2e7 NetNS:/var/run/netns/fc9b290e-c354-48e7-a4a4-030ef67f23e7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000128c40}] Aliases:map[]}"
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.053974247Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.054664611Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.055624957Z" level=info msg="Ran pod sandbox a1acf56e02477fef98fcdcd9af4128397ee8567f5e39c9520d62aa84ae5128a3 with infra container: default/busybox/POD" id=744e79ff-7090-432d-8a11-dedc3e794838 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.056800687Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=69b98429-8607-46f9-88f0-9b57dff6a436 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.056960792Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=69b98429-8607-46f9-88f0-9b57dff6a436 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.057013704Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=69b98429-8607-46f9-88f0-9b57dff6a436 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.057536685Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=76eb5fd7-659c-418f-9a0d-b7947720e57a name=/runtime.v1.ImageService/PullImage
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.060912743Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.758255152Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=76eb5fd7-659c-418f-9a0d-b7947720e57a name=/runtime.v1.ImageService/PullImage
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.758915803Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=19c7a4c2-8c09-4801-990a-5d06ffdccaa9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.759726308Z" level=info msg="Creating container: default/busybox/busybox" id=277bd87e-af77-4874-a6e9-6171d6d305d7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.759850227Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.764438299Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.764826881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.796718774Z" level=info msg="Created container 57b924b69b60a0fc1fd253128037f4604004939d374495410d6c8965dbaf47b2: default/busybox/busybox" id=277bd87e-af77-4874-a6e9-6171d6d305d7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.797231127Z" level=info msg="Starting container: 57b924b69b60a0fc1fd253128037f4604004939d374495410d6c8965dbaf47b2" id=fb48a34d-b704-49a2-8fb7-d76d1e7c798d name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:11:21 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:21.798964022Z" level=info msg="Started container" PID=2186 containerID=57b924b69b60a0fc1fd253128037f4604004939d374495410d6c8965dbaf47b2 description=default/busybox/busybox id=fb48a34d-b704-49a2-8fb7-d76d1e7c798d name=/runtime.v1.RuntimeService/StartContainer sandboxID=a1acf56e02477fef98fcdcd9af4128397ee8567f5e39c9520d62aa84ae5128a3
	Nov 24 03:11:29 old-k8s-version-579951 crio[765]: time="2025-11-24T03:11:29.823259565Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	57b924b69b60a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   a1acf56e02477       busybox                                          default
	6ba5363c6ffe2       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 seconds ago      Running             coredns                   0                   9d5fe7cee26fd       coredns-5dd5756b68-5nwx9                         kube-system
	9c02787675e9a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   db62484ca07c1       storage-provisioner                              kube-system
	10be38f00ddd9       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    24 seconds ago      Running             kindnet-cni               0                   161e535e5276d       kindnet-gdpzl                                    kube-system
	b2c1d9af22990       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   9776f82f5472f       kube-proxy-r82jh                                 kube-system
	74e73db182bd9       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   75e4bf94db85f       kube-controller-manager-old-k8s-version-579951   kube-system
	34f2a5dfac542       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   fafde475e5a5c       etcd-old-k8s-version-579951                      kube-system
	5c45086ec9a95       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   a0e69dab4c3ab       kube-scheduler-old-k8s-version-579951            kube-system
	484e1d7b5c711       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   6124e7ff8189a       kube-apiserver-old-k8s-version-579951            kube-system
	
	
	==> coredns [6ba5363c6ffe27687ebe1c297c847fceabce6fd8c18b9e09fa358f5cb35247e9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:57050 - 4960 "HINFO IN 6511162228281916234.6604496643611031998. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.106118689s
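The single HINFO query for a long random name is CoreDNS's loop-plugin self-probe, and NXDOMAIN from the upstream is the healthy outcome. To confirm the hosts block injected earlier actually resolves in-cluster, the usual DNS-debug recipe applies (busybox 1.28 ships a working nslookup):

	kubectl run dnstest --rm -it --restart=Never --image=busybox:1.28 \
	  -- nslookup host.minikube.internal
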
	
	
	==> describe nodes <==
	Name:               old-k8s-version-579951
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-579951
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=old-k8s-version-579951
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_10_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:10:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-579951
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:11:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:11:21 +0000   Mon, 24 Nov 2025 03:10:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:11:21 +0000   Mon, 24 Nov 2025 03:10:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:11:21 +0000   Mon, 24 Nov 2025 03:10:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:11:21 +0000   Mon, 24 Nov 2025 03:11:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-579951
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                5d61a30e-9821-4be7-b90f-0f413e931a19
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-5nwx9                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-old-k8s-version-579951                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-gdpzl                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-579951             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-579951    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-r82jh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-579951             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s   kubelet          Node old-k8s-version-579951 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s   kubelet          Node old-k8s-version-579951 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s   kubelet          Node old-k8s-version-579951 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-579951 event: Registered Node old-k8s-version-579951 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-579951 status is now: NodeReady
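Per the events, the node went NodeReady about 14s before this dump (LastTransitionTime 03:11:17 above), once kindnet had written the CNI config; the untolerated not-ready taints seen on pending pods elsewhere in this log disappear at that transition. The Ready condition can be read directly:

	kubectl get node old-k8s-version-579951 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # True
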
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
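"Martian source" lines mean the kernel saw packets whose source address is impossible for the receiving interface per its routing table; with Docker bridges and kindnet they are routine during pod churn and harmless here. They show up because martian logging is evidently enabled on this host:

	sysctl net.ipv4.conf.all.log_martians
	# silence the noise if unwanted (needs root):
	sudo sysctl -w net.ipv4.conf.all.log_martians=0
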
	
	
	==> etcd [34f2a5dfac542f0f078d96439a0ef74b8cfb81cc34b91c7d3a843304c353168c] <==
	{"level":"warn","ts":"2025-11-24T03:11:03.420049Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.738437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-11-24T03:11:03.42008Z","caller":"traceutil/trace.go:171","msg":"trace[877190948] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:296; }","duration":"221.773207ms","start":"2025-11-24T03:11:03.198298Z","end":"2025-11-24T03:11:03.420071Z","steps":["trace[877190948] 'agreement among raft nodes before linearized reading'  (duration: 221.703614ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:11:03.420095Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.956157ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller\" ","response":"range_response_count:1 size:234"}
	{"level":"warn","ts":"2025-11-24T03:11:03.420112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"224.261444ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"warn","ts":"2025-11-24T03:11:03.420113Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.935997ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/bootstrap-signer\" ","response":"range_response_count:1 size:197"}
	{"level":"info","ts":"2025-11-24T03:11:03.420125Z","caller":"traceutil/trace.go:171","msg":"trace[1319146881] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/clusterrole-aggregation-controller; range_end:; response_count:1; response_revision:296; }","duration":"238.989727ms","start":"2025-11-24T03:11:03.181126Z","end":"2025-11-24T03:11:03.420115Z","steps":["trace[1319146881] 'agreement among raft nodes before linearized reading'  (duration: 238.923458ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:11:03.420136Z","caller":"traceutil/trace.go:171","msg":"trace[2049631432] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:296; }","duration":"224.287017ms","start":"2025-11-24T03:11:03.195841Z","end":"2025-11-24T03:11:03.420129Z","steps":["trace[2049631432] 'agreement among raft nodes before linearized reading'  (duration: 224.236482ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:11:03.420153Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.537896ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T03:11:03.420156Z","caller":"traceutil/trace.go:171","msg":"trace[1702913212] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/bootstrap-signer; range_end:; response_count:1; response_revision:296; }","duration":"221.967192ms","start":"2025-11-24T03:11:03.198168Z","end":"2025-11-24T03:11:03.420135Z","steps":["trace[1702913212] 'agreement among raft nodes before linearized reading'  (duration: 221.895154ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:11:03.420174Z","caller":"traceutil/trace.go:171","msg":"trace[62107524] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:296; }","duration":"114.559201ms","start":"2025-11-24T03:11:03.305608Z","end":"2025-11-24T03:11:03.420167Z","steps":["trace[62107524] 'agreement among raft nodes before linearized reading'  (duration: 114.520185ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:11:03.420193Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.867748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/daemon-set-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-24T03:11:03.420426Z","caller":"traceutil/trace.go:171","msg":"trace[116955963] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/daemon-set-controller; range_end:; response_count:1; response_revision:296; }","duration":"215.098492ms","start":"2025-11-24T03:11:03.205317Z","end":"2025-11-24T03:11:03.420416Z","steps":["trace[116955963] 'agreement among raft nodes before linearized reading'  (duration: 214.842979ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:11:03.623317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.748606ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2025-11-24T03:11:03.623552Z","caller":"traceutil/trace.go:171","msg":"trace[205566462] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:314; }","duration":"116.997ms","start":"2025-11-24T03:11:03.506542Z","end":"2025-11-24T03:11:03.623539Z","steps":["trace[205566462] 'agreement among raft nodes before linearized reading'  (duration: 116.675091ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:11:03.623428Z","caller":"traceutil/trace.go:171","msg":"trace[757019709] transaction","detail":"{read_only:false; response_revision:314; number_of_response:1; }","duration":"101.86806ms","start":"2025-11-24T03:11:03.521537Z","end":"2025-11-24T03:11:03.623405Z","steps":["trace[757019709] 'process raft request'  (duration: 101.559826ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:11:03.623493Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"115.963144ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:3684"}
	{"level":"info","ts":"2025-11-24T03:11:03.623871Z","caller":"traceutil/trace.go:171","msg":"trace[584631171] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:314; }","duration":"116.340268ms","start":"2025-11-24T03:11:03.507516Z","end":"2025-11-24T03:11:03.623856Z","steps":["trace[584631171] 'agreement among raft nodes before linearized reading'  (duration: 115.926732ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:11:03.783254Z","caller":"traceutil/trace.go:171","msg":"trace[278444189] transaction","detail":"{read_only:false; response_revision:319; number_of_response:1; }","duration":"156.159588ms","start":"2025-11-24T03:11:03.627069Z","end":"2025-11-24T03:11:03.783229Z","steps":["trace[278444189] 'process raft request'  (duration: 156.051223ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:11:03.78337Z","caller":"traceutil/trace.go:171","msg":"trace[1074242652] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"153.954636ms","start":"2025-11-24T03:11:03.629398Z","end":"2025-11-24T03:11:03.783353Z","steps":["trace[1074242652] 'process raft request'  (duration: 153.805289ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:11:03.78343Z","caller":"traceutil/trace.go:171","msg":"trace[2101398083] transaction","detail":"{read_only:false; response_revision:325; number_of_response:1; }","duration":"131.549611ms","start":"2025-11-24T03:11:03.651867Z","end":"2025-11-24T03:11:03.783416Z","steps":["trace[2101398083] 'process raft request'  (duration: 131.50375ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:11:03.783519Z","caller":"traceutil/trace.go:171","msg":"trace[1843437030] transaction","detail":"{read_only:false; response_revision:318; number_of_response:1; }","duration":"156.665894ms","start":"2025-11-24T03:11:03.626842Z","end":"2025-11-24T03:11:03.783507Z","steps":["trace[1843437030] 'process raft request'  (duration: 103.724758ms)","trace[1843437030] 'compare'  (duration: 52.439229ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:11:03.783426Z","caller":"traceutil/trace.go:171","msg":"trace[1944584596] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"155.344185ms","start":"2025-11-24T03:11:03.628037Z","end":"2025-11-24T03:11:03.783381Z","steps":["trace[1944584596] 'process raft request'  (duration: 155.133392ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:11:03.784114Z","caller":"traceutil/trace.go:171","msg":"trace[1231049520] transaction","detail":"{read_only:false; response_revision:323; number_of_response:1; }","duration":"133.932922ms","start":"2025-11-24T03:11:03.650169Z","end":"2025-11-24T03:11:03.784102Z","steps":["trace[1231049520] 'process raft request'  (duration: 133.132457ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:11:03.784318Z","caller":"traceutil/trace.go:171","msg":"trace[1393446950] transaction","detail":"{read_only:false; response_revision:322; number_of_response:1; }","duration":"154.877453ms","start":"2025-11-24T03:11:03.629431Z","end":"2025-11-24T03:11:03.784308Z","steps":["trace[1393446950] 'process raft request'  (duration: 153.814677ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:11:03.787882Z","caller":"traceutil/trace.go:171","msg":"trace[1936776207] transaction","detail":"{read_only:false; response_revision:324; number_of_response:1; }","duration":"133.541357ms","start":"2025-11-24T03:11:03.650551Z","end":"2025-11-24T03:11:03.784092Z","steps":["trace[1936776207] 'process raft request'  (duration: 132.790773ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:11:31 up  1:53,  0 user,  load average: 5.88, 4.18, 2.58
	Linux old-k8s-version-579951 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [10be38f00ddd911306129a010417ccdb8eb7ea5c310b3bf7026783df2a597060] <==
	I1124 03:11:07.353295       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:11:07.353564       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 03:11:07.353700       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:11:07.353717       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:11:07.353736       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:11:07Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:11:07.554411       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:11:07.554460       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:11:07.554476       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:11:07.554610       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:11:08.047785       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:11:08.047815       1 metrics.go:72] Registering metrics
	I1124 03:11:08.047928       1 controller.go:711] "Syncing nftables rules"
	I1124 03:11:17.561970       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:11:17.562007       1 main.go:301] handling current node
	I1124 03:11:27.555106       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:11:27.555156       1 main.go:301] handling current node
	
	
	==> kube-apiserver [484e1d7b5c711f732ff62b7b009c99e922e7210bab37bc314fd985a0d322b8ab] <==
	I1124 03:10:47.773634       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 03:10:47.773647       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 03:10:47.774101       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 03:10:47.776415       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 03:10:47.777482       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 03:10:47.777515       1 aggregator.go:166] initial CRD sync complete...
	I1124 03:10:47.777522       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 03:10:47.777529       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:10:47.777537       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:10:47.803117       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:10:48.680613       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:10:48.687048       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:10:48.687076       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:10:49.293228       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:10:49.332868       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:10:49.744482       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 03:10:49.788432       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:10:49.798010       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1124 03:10:49.799414       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 03:10:49.807709       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:10:51.568489       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 03:10:51.590719       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:10:51.608720       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 03:11:03.157563       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 03:11:03.501509       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [74e73db182bd9e5767d87464663985a376758aca9f47a8500408acb05267e204] <==
	I1124 03:11:03.343529       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 03:11:03.391618       1 shared_informer.go:318] Caches are synced for disruption
	I1124 03:11:03.393947       1 shared_informer.go:318] Caches are synced for stateful set
	I1124 03:11:03.400715       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 03:11:03.422277       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1124 03:11:03.625364       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-r82jh"
	I1124 03:11:03.647525       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-gdpzl"
	I1124 03:11:03.727089       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:11:03.743716       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:11:03.743751       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 03:11:03.789733       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-rqdf6"
	I1124 03:11:03.804420       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-5nwx9"
	I1124 03:11:03.819618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="397.052201ms"
	I1124 03:11:03.833402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.724914ms"
	I1124 03:11:03.833668       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.893µs"
	I1124 03:11:04.307146       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 03:11:04.330570       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-rqdf6"
	I1124 03:11:04.342134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="34.787573ms"
	I1124 03:11:04.353388       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.122046ms"
	I1124 03:11:04.353875       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="57.017µs"
	I1124 03:11:17.651016       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="147.553µs"
	I1124 03:11:17.660269       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.711µs"
	I1124 03:11:18.207957       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1124 03:11:18.800470       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.425331ms"
	I1124 03:11:18.800558       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.871µs"
	
	
	==> kube-proxy [b2c1d9af229900f3bfd972582a600067c06f89b26590f201765af73182be85f9] <==
	I1124 03:11:04.255038       1 server_others.go:69] "Using iptables proxy"
	I1124 03:11:04.281345       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1124 03:11:04.359004       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:11:04.365475       1 server_others.go:152] "Using iptables Proxier"
	I1124 03:11:04.365522       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 03:11:04.365634       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 03:11:04.365716       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 03:11:04.366853       1 server.go:846] "Version info" version="v1.28.0"
	I1124 03:11:04.367035       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:11:04.369092       1 config.go:188] "Starting service config controller"
	I1124 03:11:04.369177       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 03:11:04.369607       1 config.go:315] "Starting node config controller"
	I1124 03:11:04.369667       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 03:11:04.369816       1 config.go:97] "Starting endpoint slice config controller"
	I1124 03:11:04.369849       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 03:11:04.469550       1 shared_informer.go:318] Caches are synced for service config
	I1124 03:11:04.470722       1 shared_informer.go:318] Caches are synced for node config
	I1124 03:11:04.471909       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5c45086ec9a9587a07369a5e59e860e4d9cc205236b096502d15885e746adc67] <==
	E1124 03:10:47.767063       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1124 03:10:47.766525       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1124 03:10:47.766102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1124 03:10:47.767096       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1124 03:10:48.612603       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1124 03:10:48.612643       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1124 03:10:48.636571       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1124 03:10:48.636618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1124 03:10:48.689523       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1124 03:10:48.689649       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1124 03:10:48.722757       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 03:10:48.722906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1124 03:10:48.758949       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1124 03:10:48.759227       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1124 03:10:48.820088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 03:10:48.822091       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1124 03:10:48.895251       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1124 03:10:48.895297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1124 03:10:48.959841       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 03:10:48.959879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 03:10:48.980037       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 03:10:48.980074       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1124 03:10:49.166093       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1124 03:10:49.166214       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1124 03:10:51.062031       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 03:11:03 old-k8s-version-579951 kubelet[1365]: I1124 03:11:03.283016    1365 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:11:03 old-k8s-version-579951 kubelet[1365]: I1124 03:11:03.784795    1365 topology_manager.go:215] "Topology Admit Handler" podUID="07210933-4da6-4a8e-b29f-15bc6a74911b" podNamespace="kube-system" podName="kube-proxy-r82jh"
	Nov 24 03:11:03 old-k8s-version-579951 kubelet[1365]: I1124 03:11:03.786988    1365 topology_manager.go:215] "Topology Admit Handler" podUID="c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d" podNamespace="kube-system" podName="kindnet-gdpzl"
	Nov 24 03:11:03 old-k8s-version-579951 kubelet[1365]: I1124 03:11:03.833231    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/07210933-4da6-4a8e-b29f-15bc6a74911b-xtables-lock\") pod \"kube-proxy-r82jh\" (UID: \"07210933-4da6-4a8e-b29f-15bc6a74911b\") " pod="kube-system/kube-proxy-r82jh"
	Nov 24 03:11:03 old-k8s-version-579951 kubelet[1365]: I1124 03:11:03.833288    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/07210933-4da6-4a8e-b29f-15bc6a74911b-lib-modules\") pod \"kube-proxy-r82jh\" (UID: \"07210933-4da6-4a8e-b29f-15bc6a74911b\") " pod="kube-system/kube-proxy-r82jh"
	Nov 24 03:11:03 old-k8s-version-579951 kubelet[1365]: I1124 03:11:03.833342    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl9vh\" (UniqueName: \"kubernetes.io/projected/07210933-4da6-4a8e-b29f-15bc6a74911b-kube-api-access-sl9vh\") pod \"kube-proxy-r82jh\" (UID: \"07210933-4da6-4a8e-b29f-15bc6a74911b\") " pod="kube-system/kube-proxy-r82jh"
	Nov 24 03:11:03 old-k8s-version-579951 kubelet[1365]: I1124 03:11:03.833388    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d-lib-modules\") pod \"kindnet-gdpzl\" (UID: \"c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d\") " pod="kube-system/kindnet-gdpzl"
	Nov 24 03:11:03 old-k8s-version-579951 kubelet[1365]: I1124 03:11:03.833424    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/07210933-4da6-4a8e-b29f-15bc6a74911b-kube-proxy\") pod \"kube-proxy-r82jh\" (UID: \"07210933-4da6-4a8e-b29f-15bc6a74911b\") " pod="kube-system/kube-proxy-r82jh"
	Nov 24 03:11:03 old-k8s-version-579951 kubelet[1365]: I1124 03:11:03.833473    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d-cni-cfg\") pod \"kindnet-gdpzl\" (UID: \"c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d\") " pod="kube-system/kindnet-gdpzl"
	Nov 24 03:11:03 old-k8s-version-579951 kubelet[1365]: I1124 03:11:03.833511    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d-xtables-lock\") pod \"kindnet-gdpzl\" (UID: \"c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d\") " pod="kube-system/kindnet-gdpzl"
	Nov 24 03:11:03 old-k8s-version-579951 kubelet[1365]: I1124 03:11:03.833572    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v49k8\" (UniqueName: \"kubernetes.io/projected/c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d-kube-api-access-v49k8\") pod \"kindnet-gdpzl\" (UID: \"c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d\") " pod="kube-system/kindnet-gdpzl"
	Nov 24 03:11:07 old-k8s-version-579951 kubelet[1365]: I1124 03:11:07.755858    1365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-r82jh" podStartSLOduration=4.75580701 podCreationTimestamp="2025-11-24 03:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:04.753250797 +0000 UTC m=+13.229247797" watchObservedRunningTime="2025-11-24 03:11:07.75580701 +0000 UTC m=+16.231804004"
	Nov 24 03:11:07 old-k8s-version-579951 kubelet[1365]: I1124 03:11:07.756002    1365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-gdpzl" podStartSLOduration=1.719426235 podCreationTimestamp="2025-11-24 03:11:03 +0000 UTC" firstStartedPulling="2025-11-24 03:11:04.115129896 +0000 UTC m=+12.591126893" lastFinishedPulling="2025-11-24 03:11:07.15167167 +0000 UTC m=+15.627668660" observedRunningTime="2025-11-24 03:11:07.752861121 +0000 UTC m=+16.228858130" watchObservedRunningTime="2025-11-24 03:11:07.755968002 +0000 UTC m=+16.231965001"
	Nov 24 03:11:17 old-k8s-version-579951 kubelet[1365]: I1124 03:11:17.627673    1365 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 03:11:17 old-k8s-version-579951 kubelet[1365]: I1124 03:11:17.651262    1365 topology_manager.go:215] "Topology Admit Handler" podUID="1278c848-f63d-4e7c-879a-523510d29787" podNamespace="kube-system" podName="coredns-5dd5756b68-5nwx9"
	Nov 24 03:11:17 old-k8s-version-579951 kubelet[1365]: I1124 03:11:17.651478    1365 topology_manager.go:215] "Topology Admit Handler" podUID="b994a9c9-e16e-40e8-b8eb-682c5dfa7372" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 03:11:17 old-k8s-version-579951 kubelet[1365]: I1124 03:11:17.734420    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b994a9c9-e16e-40e8-b8eb-682c5dfa7372-tmp\") pod \"storage-provisioner\" (UID: \"b994a9c9-e16e-40e8-b8eb-682c5dfa7372\") " pod="kube-system/storage-provisioner"
	Nov 24 03:11:17 old-k8s-version-579951 kubelet[1365]: I1124 03:11:17.734494    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsckd\" (UniqueName: \"kubernetes.io/projected/b994a9c9-e16e-40e8-b8eb-682c5dfa7372-kube-api-access-xsckd\") pod \"storage-provisioner\" (UID: \"b994a9c9-e16e-40e8-b8eb-682c5dfa7372\") " pod="kube-system/storage-provisioner"
	Nov 24 03:11:17 old-k8s-version-579951 kubelet[1365]: I1124 03:11:17.734597    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5vq9\" (UniqueName: \"kubernetes.io/projected/1278c848-f63d-4e7c-879a-523510d29787-kube-api-access-f5vq9\") pod \"coredns-5dd5756b68-5nwx9\" (UID: \"1278c848-f63d-4e7c-879a-523510d29787\") " pod="kube-system/coredns-5dd5756b68-5nwx9"
	Nov 24 03:11:17 old-k8s-version-579951 kubelet[1365]: I1124 03:11:17.734698    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1278c848-f63d-4e7c-879a-523510d29787-config-volume\") pod \"coredns-5dd5756b68-5nwx9\" (UID: \"1278c848-f63d-4e7c-879a-523510d29787\") " pod="kube-system/coredns-5dd5756b68-5nwx9"
	Nov 24 03:11:18 old-k8s-version-579951 kubelet[1365]: I1124 03:11:18.777635    1365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.777582633 podCreationTimestamp="2025-11-24 03:11:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:18.776924942 +0000 UTC m=+27.252921939" watchObservedRunningTime="2025-11-24 03:11:18.777582633 +0000 UTC m=+27.253579630"
	Nov 24 03:11:18 old-k8s-version-579951 kubelet[1365]: I1124 03:11:18.792772    1365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-5nwx9" podStartSLOduration=15.792725181 podCreationTimestamp="2025-11-24 03:11:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:18.792300458 +0000 UTC m=+27.268297455" watchObservedRunningTime="2025-11-24 03:11:18.792725181 +0000 UTC m=+27.268722180"
	Nov 24 03:11:20 old-k8s-version-579951 kubelet[1365]: I1124 03:11:20.731858    1365 topology_manager.go:215] "Topology Admit Handler" podUID="b61ae335-3755-4f88-9305-030d7d7fd2e7" podNamespace="default" podName="busybox"
	Nov 24 03:11:20 old-k8s-version-579951 kubelet[1365]: I1124 03:11:20.756526    1365 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6p64\" (UniqueName: \"kubernetes.io/projected/b61ae335-3755-4f88-9305-030d7d7fd2e7-kube-api-access-p6p64\") pod \"busybox\" (UID: \"b61ae335-3755-4f88-9305-030d7d7fd2e7\") " pod="default/busybox"
	Nov 24 03:11:22 old-k8s-version-579951 kubelet[1365]: I1124 03:11:22.792387    1365 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.091062088 podCreationTimestamp="2025-11-24 03:11:20 +0000 UTC" firstStartedPulling="2025-11-24 03:11:21.057180104 +0000 UTC m=+29.533177091" lastFinishedPulling="2025-11-24 03:11:21.758445689 +0000 UTC m=+30.234442668" observedRunningTime="2025-11-24 03:11:22.791951874 +0000 UTC m=+31.267948871" watchObservedRunningTime="2025-11-24 03:11:22.792327665 +0000 UTC m=+31.268324661"
	
	
	==> storage-provisioner [9c02787675e9a185bf714c6e29b94ae0a90043919509ae62a5337965ee786025] <==
	I1124 03:11:18.043387       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:11:18.060432       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:11:18.060663       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 03:11:18.074810       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:11:18.075868       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59a77692-accc-462a-ac9b-8cd00bada505", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-579951_d3d6c0bc-17a4-4dba-8019-da847dd8abfa became leader
	I1124 03:11:18.075957       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-579951_d3d6c0bc-17a4-4dba-8019-da847dd8abfa!
	I1124 03:11:18.176839       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-579951_d3d6c0bc-17a4-4dba-8019-da847dd8abfa!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-579951 -n old-k8s-version-579951
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-579951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.25s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-993813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-993813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (237.027096ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:11:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
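
The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-state check: before enabling an addon it lists containers with "sudo runc list -f json" inside the node, and that listing fails here because /run/runc does not exist. A minimal sketch of reproducing the failing check by hand, assuming the default-k8s-diff-port-993813 profile is still running (the profile name and error text are taken from the output above):

	# Re-run the same listing that minikube's "check paused" step performs,
	# over SSH into the node (assumes the profile is still up):
	out/minikube-linux-amd64 -p default-k8s-diff-port-993813 ssh "sudo runc list -f json"
	# On this runner the command exits non-zero with:
	#   open /run/runc: no such file or directory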
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-993813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-993813 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-993813 describe deploy/metrics-server -n kube-system: exit status 1 (54.587202ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-993813 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
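Because the enable command failed up front, the describe at start_stop_delete_test.go:213 finds no metrics-server deployment, so the image assertion at :219 has empty deployment info to match against. For reference, a sketch of the equivalent manual check once the addon does enable; the jsonpath query is illustrative and not part of the test:

	# Print the container image on the metrics-server deployment; with the
	# --images/--registries overrides applied it should contain
	# fake.domain/registry.k8s.io/echoserver:1.4
	kubectl --context default-k8s-diff-port-993813 -n kube-system \
	  get deployment metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'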
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-993813
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-993813:

-- stdout --
	[
	    {
	        "Id": "b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8",
	        "Created": "2025-11-24T03:10:55.916288058Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 639087,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:10:56.128214298Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8/hosts",
	        "LogPath": "/var/lib/docker/containers/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8-json.log",
	        "Name": "/default-k8s-diff-port-993813",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-993813:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-993813",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8",
	                "LowerDir": "/var/lib/docker/overlay2/6fe49d723a85944578f35e13638f43b6277cc82d6ac33569536577f7d90c4edd-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6fe49d723a85944578f35e13638f43b6277cc82d6ac33569536577f7d90c4edd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6fe49d723a85944578f35e13638f43b6277cc82d6ac33569536577f7d90c4edd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6fe49d723a85944578f35e13638f43b6277cc82d6ac33569536577f7d90c4edd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-993813",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-993813/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-993813",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-993813",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-993813",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6ea588ecab3e9aa8af90b2c3546efe0c72d672e2f6b7bb05bd20c29ad87caf79",
	            "SandboxKey": "/var/run/docker/netns/6ea588ecab3e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33468"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33469"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33472"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33470"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33471"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-993813": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50b2e4e61586f7fb59c4f56c2607ad50e6dc9faf4b2e274df27c397b878fe391",
	                    "EndpointID": "f7a74a2a9008aad5b30bf8bc83d5d6b0da359872433815c9e49f2ff8cd0dc930",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "2e:d8:38:8c:21:25",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-993813",
	                        "b38aecdd5f9d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-993813 logs -n 25
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-965704 sudo systemctl status docker --all --full --no-pager                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo systemctl cat docker --no-pager                                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /etc/docker/daemon.json                                                                                                                                                                                            │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo docker system info                                                                                                                                                                                                     │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:11 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                               │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                         │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cri-dockerd --version                                                                                                                                                                                                  │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo systemctl cat containerd --no-pager                                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo containerd config dump                                                                                                                                                                                                 │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo crio config                                                                                                                                                                                                            │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ delete  │ -p flannel-965704                                                                                                                                                                                                                             │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-438041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-579951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p newest-cni-438041 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p old-k8s-version-579951 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:10:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:10:57.127829  639611 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:10:57.127990  639611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:10:57.128000  639611 out.go:374] Setting ErrFile to fd 2...
	I1124 03:10:57.128004  639611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:10:57.128242  639611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:10:57.128839  639611 out.go:368] Setting JSON to false
	I1124 03:10:57.129993  639611 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6804,"bootTime":1763947053,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:10:57.130043  639611 start.go:143] virtualization: kvm guest
	I1124 03:10:57.131842  639611 out.go:179] * [newest-cni-438041] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:10:57.133006  639611 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:10:57.133003  639611 notify.go:221] Checking for updates...
	I1124 03:10:57.135165  639611 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:10:57.136402  639611 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:10:57.137671  639611 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:10:57.138741  639611 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:10:57.139904  639611 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:10:57.141390  639611 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:10:57.141496  639611 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:10:57.141578  639611 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:10:57.141703  639611 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:10:57.166641  639611 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:10:57.166738  639611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:10:57.221961  639611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-24 03:10:57.211378242 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:10:57.222054  639611 docker.go:319] overlay module found
	I1124 03:10:57.223745  639611 out.go:179] * Using the docker driver based on user configuration
	I1124 03:10:57.224957  639611 start.go:309] selected driver: docker
	I1124 03:10:57.224977  639611 start.go:927] validating driver "docker" against <nil>
	I1124 03:10:57.224994  639611 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:10:57.225758  639611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:10:57.290865  639611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-24 03:10:57.279924959 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:10:57.291115  639611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1124 03:10:57.291161  639611 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1124 03:10:57.291452  639611 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:10:57.293881  639611 out.go:179] * Using Docker driver with root privileges
	I1124 03:10:57.295058  639611 cni.go:84] Creating CNI manager for ""
	I1124 03:10:57.295146  639611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:10:57.295161  639611 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:10:57.295265  639611 start.go:353] cluster config:
	{Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:10:57.296817  639611 out.go:179] * Starting "newest-cni-438041" primary control-plane node in "newest-cni-438041" cluster
	I1124 03:10:57.297866  639611 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:10:57.299907  639611 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:10:57.301070  639611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:10:57.301103  639611 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:10:57.301112  639611 cache.go:65] Caching tarball of preloaded images
	I1124 03:10:57.301177  639611 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:10:57.301210  639611 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:10:57.301222  639611 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:10:57.301343  639611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json ...
	I1124 03:10:57.301366  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json: {Name:mk1bf53574cdc9152c6531d50672e7a950b9d2f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
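
The config write above is guarded by a file lock with a 500ms retry delay and a 1m timeout, per the Clock/Delay/Timeout fields in the log line. A minimal sketch of that acquire-with-timeout pattern, using an O_EXCL lockfile as an illustrative stand-in for minikube's actual lock implementation (path and timings mirror the log but the scheme itself is an assumption):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// acquire takes an exclusive lockfile, retrying every delay until timeout,
	// mirroring the Delay:500ms Timeout:1m0s parameters shown in the log.
	// The O_EXCL lockfile is an illustrative stand-in, not minikube's lock code.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/config.json.lock", 500*time.Millisecond, time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; safe to write config.json")
	}
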
	I1124 03:10:57.325407  639611 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:10:57.325433  639611 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:10:57.325454  639611 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:10:57.325494  639611 start.go:360] acquireMachinesLock for newest-cni-438041: {Name:mk895e89056f5ce7564002ba75457dcfde41ce4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:10:57.325596  639611 start.go:364] duration metric: took 82.202µs to acquireMachinesLock for "newest-cni-438041"
	I1124 03:10:57.325624  639611 start.go:93] Provisioning new machine with config: &{Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:10:57.325724  639611 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:10:55.541109  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (3.244075519s)
	I1124 03:10:55.541150  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1124 03:10:55.541172  631782 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 03:10:55.541227  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 03:10:56.794831  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.25357343s)
	I1124 03:10:56.794863  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 03:10:56.794908  631782 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 03:10:56.794989  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
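
The image-load phase above transfers each cached tarball and loads it with `sudo podman load -i <tar>`, one image at a time, timing each step. A minimal local sketch of that loop; the paths are taken from the log, but running podman directly (rather than over minikube's ssh_runner inside the node) is a simplification:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	// loadImages loads each cached image tarball with `podman load`,
	// mirroring the sequential "Loading image: ..." steps in the log.
	// The real code executes these commands over SSH inside the node.
	func loadImages(tarballs []string) error {
		for _, tar := range tarballs {
			start := time.Now()
			out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
			if err != nil {
				return fmt.Errorf("podman load %s: %v\n%s", tar, err, out)
			}
			fmt.Printf("loaded %s in %s\n", tar, time.Since(start))
		}
		return nil
	}

	func main() {
		imgs := []string{
			"/var/lib/minikube/images/kube-apiserver_v1.34.1",
			"/var/lib/minikube/images/kube-proxy_v1.34.1",
		}
		if err := loadImages(imgs); err != nil {
			log.Fatal(err)
		}
	}
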
	I1124 03:10:55.833612  636397 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993813:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.620337954s)
	I1124 03:10:55.833645  636397 kic.go:203] duration metric: took 5.620509753s to extract preloaded images to volume ...
	W1124 03:10:55.833730  636397 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:10:55.833774  636397 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:10:55.833824  636397 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:10:55.899529  636397 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993813 --name default-k8s-diff-port-993813 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993813 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993813 --network default-k8s-diff-port-993813 --ip 192.168.76.2 --volume default-k8s-diff-port-993813:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:10:56.489655  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Running}}
	I1124 03:10:56.513036  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:10:56.535229  636397 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993813 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:10:56.595848  636397 oci.go:144] the created container "default-k8s-diff-port-993813" has a running status.
	I1124 03:10:56.595922  636397 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa...
	I1124 03:10:56.701587  636397 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:10:56.875193  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:10:56.894915  636397 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:10:56.894937  636397 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993813 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:10:56.946242  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:10:56.964911  636397 machine.go:94] provisionDockerMachine start ...
	I1124 03:10:56.965003  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:10:56.983380  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:10:56.983615  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:10:56.983627  636397 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:10:56.984346  636397 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37014->127.0.0.1:33468: read: connection reset by peer
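
The first SSH dial to a just-started container commonly fails with "connection reset by peer" while sshd is still coming up; the later successful hostname output shows the provisioner simply retried. A hedged sketch of such a dial-with-retry loop using only the standard library (the address, attempt count, and timings are illustrative, not minikube's actual values):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// dialWithRetry keeps attempting a TCP connection until sshd inside the
	// freshly started container accepts it, as the "connection reset by peer"
	// followed by a later success suggests the real code does.
	func dialWithRetry(addr string, attempts int, delay time.Duration) (net.Conn, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			lastErr = err
			time.Sleep(delay)
		}
		return nil, fmt.Errorf("ssh endpoint %s not ready after %d attempts: %w", addr, attempts, lastErr)
	}

	func main() {
		conn, err := dialWithRetry("127.0.0.1:33468", 10, time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer conn.Close()
		fmt.Println("connected:", conn.RemoteAddr())
	}
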
	I1124 03:10:57.234863  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:57.734595  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:58.234694  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:58.734330  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:59.234707  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:59.735106  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:00.234710  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:00.735086  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:01.235238  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:01.735122  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:57.328166  639611 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:10:57.328471  639611 start.go:159] libmachine.API.Create for "newest-cni-438041" (driver="docker")
	I1124 03:10:57.328503  639611 client.go:173] LocalClient.Create starting
	I1124 03:10:57.328585  639611 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:10:57.328619  639611 main.go:143] libmachine: Decoding PEM data...
	I1124 03:10:57.328645  639611 main.go:143] libmachine: Parsing certificate...
	I1124 03:10:57.328730  639611 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:10:57.328758  639611 main.go:143] libmachine: Decoding PEM data...
	I1124 03:10:57.328776  639611 main.go:143] libmachine: Parsing certificate...
	I1124 03:10:57.329238  639611 cli_runner.go:164] Run: docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:10:57.347161  639611 cli_runner.go:211] docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:10:57.347240  639611 network_create.go:284] running [docker network inspect newest-cni-438041] to gather additional debugging logs...
	I1124 03:10:57.347259  639611 cli_runner.go:164] Run: docker network inspect newest-cni-438041
	W1124 03:10:57.366750  639611 cli_runner.go:211] docker network inspect newest-cni-438041 returned with exit code 1
	I1124 03:10:57.366777  639611 network_create.go:287] error running [docker network inspect newest-cni-438041]: docker network inspect newest-cni-438041: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-438041 not found
	I1124 03:10:57.366807  639611 network_create.go:289] output of [docker network inspect newest-cni-438041]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-438041 not found
	
	** /stderr **
	I1124 03:10:57.366976  639611 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:10:57.385293  639611 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:10:57.386152  639611 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:10:57.387409  639611 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:10:57.388971  639611 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-50b2e4e61586 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:0d:88:19:7d:df} reservation:<nil>}
	I1124 03:10:57.389487  639611 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6fb41680caed IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:a0:6c:44:95:b2} reservation:<nil>}
	I1124 03:10:57.390236  639611 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018f44a0}
	I1124 03:10:57.390257  639611 network_create.go:124] attempt to create docker network newest-cni-438041 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 03:10:57.390305  639611 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-438041 newest-cni-438041
	I1124 03:10:57.440525  639611 network_create.go:108] docker network newest-cni-438041 192.168.94.0/24 created
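
The subnet probe above walks candidate 192.168.x.0/24 networks (49, 58, 67, 76, 85, then 94; the skipped entries suggest a step of 9 in the third octet) and takes the first one whose gateway address is not already bound to a local bridge interface. A minimal sketch of that scan under those assumptions; the step size and the interface-address check are inferred from the log, not copied from minikube's source:

	package main

	import (
		"fmt"
		"net"
	)

	// gatewayInUse reports whether ip is already assigned to a local interface,
	// which is how a taken docker bridge subnet reveals itself on the host.
	func gatewayInUse(ip net.IP) bool {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return false
		}
		for _, a := range addrs {
			if ipn, ok := a.(*net.IPNet); ok && ipn.IP.Equal(ip) {
				return true
			}
		}
		return false
	}

	func main() {
		// Candidate /24s step by 9 in the third octet (49, 58, 67, ...),
		// matching the subnets skipped in the log; the step size is inferred.
		for third := 49; third < 255; third += 9 {
			gw := net.IPv4(192, 168, byte(third), 1)
			if gatewayInUse(gw) {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", third)
				continue
			}
			fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway %s)\n", third, gw)
			break
		}
	}
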
	I1124 03:10:57.440568  639611 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-438041" container
	I1124 03:10:57.440642  639611 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:10:57.458704  639611 cli_runner.go:164] Run: docker volume create newest-cni-438041 --label name.minikube.sigs.k8s.io=newest-cni-438041 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:10:57.476351  639611 oci.go:103] Successfully created a docker volume newest-cni-438041
	I1124 03:10:57.476450  639611 cli_runner.go:164] Run: docker run --rm --name newest-cni-438041-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-438041 --entrypoint /usr/bin/test -v newest-cni-438041:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:10:58.353729  639611 oci.go:107] Successfully prepared a docker volume newest-cni-438041
	I1124 03:10:58.353794  639611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:10:58.353806  639611 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:10:58.353903  639611 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-438041:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:10:58.184837  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.389817981s)
	I1124 03:10:58.184869  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1124 03:10:58.184909  631782 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:10:58.184953  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:11:00.135230  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:11:00.135263  636397 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993813"
	I1124 03:11:00.135337  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.156666  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:00.157040  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:11:00.157061  636397 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993813 && echo "default-k8s-diff-port-993813" | sudo tee /etc/hostname
	I1124 03:11:00.317337  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:11:00.317424  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.338575  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:00.338824  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:11:00.338843  636397 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993813/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:11:00.487669  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:11:00.487698  636397 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:11:00.487736  636397 ubuntu.go:190] setting up certificates
	I1124 03:11:00.487751  636397 provision.go:84] configureAuth start
	I1124 03:11:00.487815  636397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:11:00.511564  636397 provision.go:143] copyHostCerts
	I1124 03:11:00.511630  636397 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:11:00.511666  636397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:11:00.511735  636397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:11:00.514009  636397 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:11:00.514030  636397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:11:00.514075  636397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:11:00.514159  636397 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:11:00.514167  636397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:11:00.514200  636397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:11:00.514270  636397 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993813 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-993813 localhost minikube]
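
configureAuth generates a server certificate whose SANs cover the loopback address, the container IP, the machine name, localhost, and minikube, signed by the local CA. A minimal crypto/x509 sketch producing a certificate with exactly those SANs; note it is self-signed for brevity, whereas the real provisioner signs with ca.pem/ca-key.pem:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-993813"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as listed in the provision.go log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			DNSNames:    []string{"default-k8s-diff-port-993813", "localhost", "minikube"},
		}
		// Self-signed for brevity; the real provisioner signs with ca.pem/ca-key.pem.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
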
	I1124 03:11:00.658058  636397 provision.go:177] copyRemoteCerts
	I1124 03:11:00.658133  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:11:00.658198  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.678015  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:00.787811  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:11:00.908237  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 03:11:00.926667  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:11:00.945146  636397 provision.go:87] duration metric: took 457.380171ms to configureAuth
	I1124 03:11:00.945175  636397 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:11:00.945368  636397 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:00.945497  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.963523  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:00.963843  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:11:00.963867  636397 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:11:01.528016  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:11:01.528042  636397 machine.go:97] duration metric: took 4.563106275s to provisionDockerMachine
	I1124 03:11:01.528055  636397 client.go:176] duration metric: took 12.433514854s to LocalClient.Create
	I1124 03:11:01.528076  636397 start.go:167] duration metric: took 12.433610792s to libmachine.API.Create "default-k8s-diff-port-993813"
	I1124 03:11:01.528087  636397 start.go:293] postStartSetup for "default-k8s-diff-port-993813" (driver="docker")
	I1124 03:11:01.528107  636397 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:11:01.528192  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:11:01.528250  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:01.550426  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:01.725783  636397 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:11:01.731121  636397 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:11:01.731156  636397 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:11:01.731171  636397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:11:01.731245  636397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:11:01.731344  636397 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:11:01.731461  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:11:01.741273  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:02.020513  636397 start.go:296] duration metric: took 492.40359ms for postStartSetup
	I1124 03:11:02.119944  636397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:11:02.137546  636397 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/config.json ...
	I1124 03:11:02.185355  636397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:11:02.185405  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:02.201426  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:02.297393  636397 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:11:02.302398  636397 start.go:128] duration metric: took 13.210072434s to createHost
	I1124 03:11:02.302422  636397 start.go:83] releasing machines lock for "default-k8s-diff-port-993813", held for 13.210223546s
	I1124 03:11:02.302502  636397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:11:02.319872  636397 ssh_runner.go:195] Run: cat /version.json
	I1124 03:11:02.319913  636397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:11:02.319948  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:02.319995  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:02.340353  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:02.340353  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:02.486835  636397 ssh_runner.go:195] Run: systemctl --version
	I1124 03:11:02.493433  636397 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:11:02.533294  636397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:11:02.538557  636397 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:11:02.538616  636397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:11:02.908750  636397 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:11:02.908778  636397 start.go:496] detecting cgroup driver to use...
	I1124 03:11:02.908812  636397 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:11:02.908861  636397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:11:02.925941  636397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:11:02.941046  636397 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:11:02.941102  636397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:11:02.959121  636397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:11:02.975801  636397 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:11:03.054110  636397 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:11:03.174491  636397 docker.go:234] disabling docker service ...
	I1124 03:11:03.174560  636397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:11:03.193664  636397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:11:03.207203  636397 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:11:03.340321  636397 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:11:03.515878  636397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:11:03.529161  636397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:11:03.543103  636397 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:11:03.543166  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.604968  636397 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:11:03.605035  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.624611  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.645648  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.689119  636397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:11:03.698440  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.783084  636397 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
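
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with a series of `sudo sed -i` commands: pin the pause image, switch cgroup_manager to systemd, set conmon_cgroup, and seed default_sysctls. A small in-process sketch of the first two substitutions applied to a sample snippet; the sample's starting values are assumptions, and the real code shells the sed commands over SSH rather than editing in Go:

	package main

	import (
		"fmt"
		"regexp"
	)

	// The log applies these edits with `sudo sed -i` over SSH; this sketch does
	// the same two line substitutions in-process on a sample config snippet.
	func main() {
		conf := `pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"`

		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)

		cgm := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		conf = cgm.ReplaceAllString(conf, `cgroup_manager = "systemd"`)

		fmt.Println(conf)
	}
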
	I1124 03:11:02.234544  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:02.735113  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:03.234728  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:03.735125  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:03.823251  623347 kubeadm.go:1114] duration metric: took 11.180431183s to wait for elevateKubeSystemPrivileges
	I1124 03:11:03.823284  623347 kubeadm.go:403] duration metric: took 22.234422884s to StartCluster
	I1124 03:11:03.823307  623347 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:03.823374  623347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:03.824432  623347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:03.824684  623347 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:03.824740  623347 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:03.824845  623347 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-579951"
	I1124 03:11:03.824727  623347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:03.824906  623347 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-579951"
	I1124 03:11:03.824917  623347 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:11:03.824923  623347 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-579951"
	I1124 03:11:03.824900  623347 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-579951"
	I1124 03:11:03.825024  623347 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:03.825377  623347 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:03.825590  623347 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:03.826953  623347 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:03.828395  623347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:03.862253  623347 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-579951"
	I1124 03:11:03.862302  623347 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:03.862810  623347 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:03.864365  623347 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:03.807318  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.820946  636397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:11:03.839099  636397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:11:03.853603  636397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:04.008696  636397 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:11:04.280958  636397 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:11:04.281140  636397 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:11:04.287138  636397 start.go:564] Will wait 60s for crictl version
	I1124 03:11:04.287195  636397 ssh_runner.go:195] Run: which crictl
	I1124 03:11:04.296400  636397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:11:04.343627  636397 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:11:04.343993  636397 ssh_runner.go:195] Run: crio --version
	I1124 03:11:04.389849  636397 ssh_runner.go:195] Run: crio --version
	I1124 03:11:04.426944  636397 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:11:03.866933  623347 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:03.866992  623347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:03.867050  623347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:03.908181  623347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:03.911219  623347 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:03.911443  623347 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:03.911619  623347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:03.949048  623347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:03.966864  623347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:04.039230  623347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:04.056821  623347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:04.079844  623347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:04.252855  623347 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:04.253835  623347 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-579951" to be "Ready" ...
	I1124 03:11:04.604404  623347 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:04.605457  623347 addons.go:530] duration metric: took 780.71049ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:04.763969  623347 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-579951" context rescaled to 1 replicas
	W1124 03:11:06.257869  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
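
node_ready waits up to 6m0s here, re-checking the node's Ready condition and logging a will-retry line on each failed poll, as the warning above shows. A generic sketch of that poll-until-ready loop; the check function is a toy stand-in for querying the node object, and the poll interval is an assumption:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitNodeReady polls a readiness check until it succeeds or the deadline
	// passes, logging a "will retry" line like the test output above.
	func waitNodeReady(node string, timeout, interval time.Duration, ready func() (bool, error)) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			ok, err := ready()
			if err != nil {
				return err
			}
			if ok {
				return nil
			}
			fmt.Printf("node %q has \"Ready\":\"False\" status (will retry)\n", node)
			time.Sleep(interval)
		}
		return errors.New("timed out waiting for node to be Ready")
	}

	func main() {
		n := 0
		_ = waitNodeReady("old-k8s-version-579951", 6*time.Minute, 2*time.Second, func() (bool, error) {
			n++
			return n > 2, nil // becomes Ready on the third poll in this toy check
		})
	}
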
	I1124 03:11:03.812979  639611 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-438041:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.459016714s)
	I1124 03:11:03.813017  639611 kic.go:203] duration metric: took 5.459207202s to extract preloaded images to volume ...
	W1124 03:11:03.813173  639611 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:11:03.813255  639611 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:11:03.813304  639611 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:11:03.930433  639611 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-438041 --name newest-cni-438041 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-438041 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-438041 --network newest-cni-438041 --ip 192.168.94.2 --volume newest-cni-438041:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:11:04.484106  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Running}}
	I1124 03:11:04.506492  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:04.527784  639611 cli_runner.go:164] Run: docker exec newest-cni-438041 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:11:04.586541  639611 oci.go:144] the created container "newest-cni-438041" has a running status.
	I1124 03:11:04.586577  639611 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa...
	I1124 03:11:04.720361  639611 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:11:04.758530  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:04.794751  639611 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:11:04.794778  639611 kic_runner.go:114] Args: [docker exec --privileged newest-cni-438041 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:11:04.848966  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:04.868444  639611 machine.go:94] provisionDockerMachine start ...
	I1124 03:11:04.868542  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:04.886704  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:04.887098  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:04.887115  639611 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:11:04.887825  639611 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60056->127.0.0.1:33473: read: connection reset by peer
	I1124 03:11:03.698009  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.513031284s)
	I1124 03:11:03.698036  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 03:11:03.698072  631782 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:11:03.698135  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:11:04.540749  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 03:11:04.540878  631782 cache_images.go:125] Successfully loaded all cached images
	I1124 03:11:04.540962  631782 cache_images.go:94] duration metric: took 16.632965714s to LoadCachedImages
	I1124 03:11:04.540998  631782 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 03:11:04.541478  631782 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-603010 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:11:04.541629  631782 ssh_runner.go:195] Run: crio config
	I1124 03:11:04.613074  631782 cni.go:84] Creating CNI manager for ""
	I1124 03:11:04.613101  631782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:04.613135  631782 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:11:04.613165  631782 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603010 NodeName:no-preload-603010 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:04.613332  631782 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603010"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
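The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are first written to /var/tmp/minikube/kubeadm.yaml.new and only promoted to kubeadm.yaml by the cp further down. A sketch for sanity-checking such a file by hand, assuming the staged kubeadm binary and that the running kubeadm release ships the validate subcommand (recent ones do):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new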
	
	I1124 03:11:04.613410  631782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:04.624805  631782 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 03:11:04.624880  631782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:04.636504  631782 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1124 03:11:04.636570  631782 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1124 03:11:04.636598  631782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 03:11:04.637106  631782 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1124 03:11:04.641001  631782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 03:11:04.641031  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1124 03:11:05.924351  631782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:05.942273  631782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 03:11:05.947268  631782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 03:11:05.947299  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1124 03:11:06.319700  631782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 03:11:06.328312  631782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 03:11:06.328362  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
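Each binary above is fetched together with its published .sha256 and verified before being cached and scp'd to the node. A rough hand-run equivalent of that download-and-verify pattern, using the same dl.k8s.io URLs:

	VER=v1.34.1
	for BIN in kubectl kubelet kubeadm; do
	  curl -fsSLo "$BIN"        "https://dl.k8s.io/release/$VER/bin/linux/amd64/$BIN"
	  curl -fsSLo "$BIN.sha256" "https://dl.k8s.io/release/$VER/bin/linux/amd64/$BIN.sha256"
	  echo "$(cat "$BIN.sha256")  $BIN" | sha256sum --check -   # the .sha256 file holds the bare digest
	done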
	I1124 03:11:06.576699  631782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:06.584640  631782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:11:06.596881  631782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:06.706372  631782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1124 03:11:06.725651  631782 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:06.731312  631782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
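The /etc/hosts one-liner above works around the fact that the > redirect runs in the unprivileged shell: the filtered copy is built in /tmp and only the final cp needs sudo. Unrolled, it is equivalent to:

	grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/h.$$    # drop any stale entry
	printf '192.168.85.2\tcontrol-plane.minikube.internal\n' >> /tmp/h.$$   # append the current IP
	sudo cp /tmp/h.$$ /etc/hosts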
	I1124 03:11:06.856376  631782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:06.964324  631782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:06.983343  631782 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010 for IP: 192.168.85.2
	I1124 03:11:06.983368  631782 certs.go:195] generating shared ca certs ...
	I1124 03:11:06.983389  631782 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:06.983554  631782 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:06.983623  631782 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:06.983638  631782 certs.go:257] generating profile certs ...
	I1124 03:11:06.983713  631782 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key
	I1124 03:11:06.983731  631782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.crt with IP's: []
	I1124 03:11:07.236879  631782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.crt ...
	I1124 03:11:07.236911  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.crt: {Name:mk2d55635da2a9326437d41d4577da0fe14409fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.237058  631782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key ...
	I1124 03:11:07.237070  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key: {Name:mkaa577d5c9ee92828884715bd0dda9017fc9779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.237153  631782 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738
	I1124 03:11:07.237166  631782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 03:11:07.327953  631782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738 ...
	I1124 03:11:07.327981  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738: {Name:mk8a9cae6d8e3a4cc6d6140e38080bb869e23acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.328138  631782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738 ...
	I1124 03:11:07.328156  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738: {Name:mkbf13b81ddaf24f4938052522adb9836ef8e1d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.328261  631782 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt
	I1124 03:11:07.328354  631782 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key
	I1124 03:11:07.328436  631782 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key
	I1124 03:11:07.328458  631782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt with IP's: []
	I1124 03:11:07.358779  631782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt ...
	I1124 03:11:07.358798  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt: {Name:mk394a0184e993e66f37c39d12264673ee1326c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.358929  631782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key ...
	I1124 03:11:07.358944  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key: {Name:mkf0922c5b9c127348bd0d94fa6adc983ccc147a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.359146  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:07.359197  631782 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:07.359210  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:07.359245  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:07.359288  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:07.359324  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:07.359391  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:07.360046  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:07.377802  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:07.394719  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:07.411226  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:07.427651  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:11:07.443818  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:11:07.461178  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:07.477210  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:11:07.493639  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:07.511874  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:07.528421  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:07.544763  631782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:07.557346  631782 ssh_runner.go:195] Run: openssl version
	I1124 03:11:07.563499  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:07.571402  631782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:07.574952  631782 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:07.575004  631782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:07.608612  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:07.616619  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:07.624657  631782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:07.628272  631782 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:07.628318  631782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:07.662522  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:11:07.670558  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:07.678360  631782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:07.681796  631782 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:07.681850  631782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:07.715936  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
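The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look certificates up in /etc/ssl/certs, which is why each PEM gets a <hash>.0 symlink (b5213941.0 for minikubeCA here). The convention, sketched:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"                # .0 = first cert with this hash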
	I1124 03:11:07.723734  631782 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:07.727008  631782 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:11:07.727066  631782 kubeadm.go:401] StartCluster: {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:07.727159  631782 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:07.727200  631782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:07.757836  631782 cri.go:89] found id: ""
	I1124 03:11:07.757930  631782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:07.767026  631782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:11:07.775281  631782 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:11:07.775329  631782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:11:07.782944  631782 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:11:07.782960  631782 kubeadm.go:158] found existing configuration files:
	
	I1124 03:11:07.782996  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:11:07.790173  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:11:07.790211  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:11:07.797407  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:11:07.804469  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:11:07.804513  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:11:07.811339  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:11:07.818449  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:11:07.818485  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:11:07.825301  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:11:07.832368  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:11:07.832409  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
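The four grep/rm pairs above are a stale-config sweep: any kubeconfig under /etc/kubernetes that does not already reference the expected control-plane endpoint is deleted so kubeadm can regenerate it. Condensed to a loop with the same effect (EP is illustrative):

	EP=https://control-plane.minikube.internal:8443
	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -qF "$EP" "/etc/kubernetes/$f.conf" || sudo rm -f "/etc/kubernetes/$f.conf"
	done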
	I1124 03:11:07.839105  631782 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:11:07.875134  631782 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:11:07.875186  631782 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:11:07.899771  631782 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:11:07.899860  631782 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:11:07.899936  631782 kubeadm.go:319] OS: Linux
	I1124 03:11:07.900023  631782 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:11:07.900109  631782 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:11:07.900181  631782 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:11:07.900246  631782 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:11:07.900310  631782 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:11:07.900374  631782 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:11:07.900436  631782 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:11:07.900489  631782 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:11:07.966533  631782 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:11:07.966689  631782 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:11:07.966849  631782 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:11:07.981358  631782 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
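As the preflight hint above notes, the control-plane images can be pulled ahead of init to keep pull time off the critical path; with the binaries staged as in this run, that would look like:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull \
	  --config /var/tmp/minikube/kubeadm.yaml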
	I1124 03:11:04.428062  636397 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:11:04.452862  636397 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:11:04.458281  636397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:04.471103  636397 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:11:04.471281  636397 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:11:04.471346  636397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:04.523060  636397 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:04.523089  636397 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:11:04.523147  636397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:04.562653  636397 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:04.562684  636397 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:11:04.562695  636397 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 03:11:04.562806  636397 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:11:04.562939  636397 ssh_runner.go:195] Run: crio config
	I1124 03:11:04.638357  636397 cni.go:84] Creating CNI manager for ""
	I1124 03:11:04.638382  636397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:04.638402  636397 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:11:04.638430  636397 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993813 NodeName:default-k8s-diff-port-993813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:04.638602  636397 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993813"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:11:04.638670  636397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:04.649639  636397 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:11:04.649707  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:04.665638  636397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 03:11:04.685753  636397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:04.706728  636397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1124 03:11:04.727449  636397 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:04.732474  636397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:04.750204  636397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:04.878850  636397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:04.905254  636397 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813 for IP: 192.168.76.2
	I1124 03:11:04.905269  636397 certs.go:195] generating shared ca certs ...
	I1124 03:11:04.905285  636397 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:04.905416  636397 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:04.905456  636397 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:04.905465  636397 certs.go:257] generating profile certs ...
	I1124 03:11:04.905521  636397 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key
	I1124 03:11:04.905533  636397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.crt with IP's: []
	I1124 03:11:05.049206  636397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.crt ...
	I1124 03:11:05.049242  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.crt: {Name:mk818bd7c5f4a63b56241a5f5b815a5c96f8af6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.049427  636397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key ...
	I1124 03:11:05.049453  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key: {Name:mkb83de72d7be9aac5a3b6d7ffec3016949857c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.049582  636397 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619
	I1124 03:11:05.049600  636397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 03:11:05.290005  636397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619 ...
	I1124 03:11:05.290086  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619: {Name:mkbe37296015109a5ee861e9a87e29d9440c243c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.290281  636397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619 ...
	I1124 03:11:05.290300  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619: {Name:mk596e1b3db31f58cc0b8eb40ec231f070ee1f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.290403  636397 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt
	I1124 03:11:05.290503  636397 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key
	I1124 03:11:05.290584  636397 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key
	I1124 03:11:05.290607  636397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt with IP's: []
	I1124 03:11:05.405376  636397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt ...
	I1124 03:11:05.405411  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt: {Name:mk5c1d3bc48ab0dc1254aae88b7ec32711e77a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.405578  636397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key ...
	I1124 03:11:05.405599  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key: {Name:mk42df1886b091d28840c422e5e20c0f8c4e5569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.405873  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:05.405948  636397 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:05.405959  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:05.406001  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:05.406031  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:05.406059  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:05.406113  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:05.406989  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:05.434254  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:05.460107  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:05.485830  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:05.511902  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 03:11:05.535282  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:11:05.558610  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:05.579558  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:11:05.598340  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:05.620622  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:05.644303  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:05.667291  636397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:05.681732  636397 ssh_runner.go:195] Run: openssl version
	I1124 03:11:05.689816  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:05.701038  636397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:05.705646  636397 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:05.705699  636397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:05.763638  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:11:05.776210  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:05.789125  636397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:05.794258  636397 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:05.794315  636397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:05.853631  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:11:05.886140  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:05.898078  636397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:05.902187  636397 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:05.902252  636397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:06.009788  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:06.034772  636397 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:06.040075  636397 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:11:06.040136  636397 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:06.040285  636397 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:06.040340  636397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:06.076603  636397 cri.go:89] found id: ""
	I1124 03:11:06.076664  636397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:06.084730  636397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:11:06.096161  636397 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:11:06.096213  636397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:11:06.104666  636397 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:11:06.104687  636397 kubeadm.go:158] found existing configuration files:
	
	I1124 03:11:06.104736  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1124 03:11:06.112142  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:11:06.112188  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:11:06.119278  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1124 03:11:06.126557  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:11:06.126604  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:11:06.133611  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1124 03:11:06.141319  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:11:06.141384  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:11:06.151450  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1124 03:11:06.162299  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:11:06.162489  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:11:06.173268  636397 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:11:06.365493  636397 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:11:06.445191  636397 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:11:08.034430  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-438041
	
	I1124 03:11:08.034458  639611 ubuntu.go:182] provisioning hostname "newest-cni-438041"
	I1124 03:11:08.034525  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.053306  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:08.053556  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:08.053570  639611 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-438041 && echo "newest-cni-438041" | sudo tee /etc/hostname
	I1124 03:11:08.201604  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-438041
	
	I1124 03:11:08.201678  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.220581  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:08.220950  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:08.220977  639611 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-438041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-438041/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-438041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:11:08.358818  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
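The script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1 (distinct from 127.0.0.1/localhost), rewriting an existing 127.0.1.1 entry or appending one so the name resolves without DNS. To verify after provisioning:

	hostname                          # expect newest-cni-438041
	getent hosts newest-cni-438041    # should resolve via /etc/hosts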
	I1124 03:11:08.358853  639611 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:11:08.358877  639611 ubuntu.go:190] setting up certificates
	I1124 03:11:08.358902  639611 provision.go:84] configureAuth start
	I1124 03:11:08.358979  639611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:08.377513  639611 provision.go:143] copyHostCerts
	I1124 03:11:08.377573  639611 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:11:08.377584  639611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:11:08.377654  639611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:11:08.377742  639611 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:11:08.377752  639611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:11:08.377785  639611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:11:08.377851  639611 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:11:08.377860  639611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:11:08.377905  639611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:11:08.378033  639611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.newest-cni-438041 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-438041]
	I1124 03:11:08.493906  639611 provision.go:177] copyRemoteCerts
	I1124 03:11:08.493995  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:11:08.494042  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.512353  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:08.611703  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:11:08.635092  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:11:08.653622  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:11:08.675705  639611 provision.go:87] duration metric: took 316.785216ms to configureAuth
	I1124 03:11:08.675736  639611 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:11:08.676005  639611 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:08.676156  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.697718  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:08.698047  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:08.698069  639611 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:11:08.991292  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:11:08.991321  639611 machine.go:97] duration metric: took 4.122852164s to provisionDockerMachine
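[editor's note] The /etc/sysconfig/crio.minikube drop-in written above carries one variable: the cluster's service CIDR is passed to CRI-O as an insecure registry, so registries exposed on service IPs (e.g. the registry addon) can be pulled from without TLS. A sketch of the payload as a function of the CIDR (crioSysconfig is a hypothetical helper):

package main

import "fmt"

// crioSysconfig reproduces the sysconfig payload from the log above.
func crioSysconfig(serviceCIDR string) string {
	return fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
}

func main() {
	fmt.Print(crioSysconfig("10.96.0.0/12"))
}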
	I1124 03:11:08.991334  639611 client.go:176] duration metric: took 11.662821141s to LocalClient.Create
	I1124 03:11:08.991367  639611 start.go:167] duration metric: took 11.662898329s to libmachine.API.Create "newest-cni-438041"
	I1124 03:11:08.991381  639611 start.go:293] postStartSetup for "newest-cni-438041" (driver="docker")
	I1124 03:11:08.991395  639611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:11:08.991454  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:11:08.991515  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.009958  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.110159  639611 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:11:09.113555  639611 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:11:09.113584  639611 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:11:09.113597  639611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:11:09.113650  639611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:11:09.113762  639611 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:11:09.113944  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:11:09.121410  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:09.140617  639611 start.go:296] duration metric: took 149.222262ms for postStartSetup
	I1124 03:11:09.141052  639611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:09.158606  639611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json ...
	I1124 03:11:09.158846  639611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:11:09.158906  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.176052  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.271931  639611 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:11:09.276348  639611 start.go:128] duration metric: took 11.950609978s to createHost
	I1124 03:11:09.276376  639611 start.go:83] releasing machines lock for "newest-cni-438041", held for 11.950766604s
	I1124 03:11:09.276440  639611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:09.294908  639611 ssh_runner.go:195] Run: cat /version.json
	I1124 03:11:09.294952  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.294957  639611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:11:09.295031  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.313079  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.314881  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.408772  639611 ssh_runner.go:195] Run: systemctl --version
	I1124 03:11:09.469031  639611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:11:09.504409  639611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:11:09.508820  639611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:11:09.508877  639611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:11:09.533917  639611 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
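[editor's note] The find/mv pass above sidelines any bridge or podman CNI configs so they cannot conflict with the kindnet CNI chosen later; files get a ".mk_disabled" suffix rather than being deleted. The same walk expressed in Go (a local sketch; the real step runs the find command over SSH with sudo):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs renames bridge/podman CNI configs under dir with a
// ".mk_disabled" suffix, mirroring the find+mv command in the log.
func disableBridgeCNIs(dir string) ([]string, error) {
	var disabled []string
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableBridgeCNIs("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("disabled %v bridge cni config(s)\n", disabled)
}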
	I1124 03:11:09.533945  639611 start.go:496] detecting cgroup driver to use...
	I1124 03:11:09.533978  639611 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:11:09.534024  639611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:11:09.550223  639611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:11:09.561378  639611 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:11:09.561431  639611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:11:09.576700  639611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:11:09.592718  639611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:11:09.686327  639611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:11:09.778323  639611 docker.go:234] disabling docker service ...
	I1124 03:11:09.778388  639611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:11:09.797725  639611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:11:09.809981  639611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:11:09.897574  639611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:11:09.981763  639611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:11:09.993604  639611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:11:10.008039  639611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:11:10.008088  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.017807  639611 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:11:10.017915  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.026036  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.034318  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.042375  639611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:11:10.050115  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.058198  639611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.071036  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
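[editor's note] The sed sequence above converges /etc/crio/crio.conf.d/02-crio.conf on a known state: the pinned pause image, systemd cgroups, conmon in the "pod" cgroup, and unprivileged ports from 0. The same rewrites applied to an in-memory copy (the starting contents below are illustrative, not the real drop-in):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// Illustrative starting contents; the real drop-in ships with the base image.
	conf := `[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"

[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
`
	// Same rewrites as the sed sequence in the log, applied in-memory.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "") // drop, then re-add right after cgroup_manager
	conf = regexp.MustCompile(`(?m)^cgroup_manager = .*$`).
		ReplaceAllString(conf, "$0\nconmon_cgroup = \"pod\"")
	if !strings.Contains(conf, "default_sysctls") {
		conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$`).
			ReplaceAllString(conf, "$0\ndefault_sysctls = [\n]")
	}
	conf = regexp.MustCompile(`(?m)^default_sysctls *= *\[`).
		ReplaceAllString(conf, "$0\n  \"net.ipv4.ip_unprivileged_port_start=0\",")
	fmt.Print(conf)
}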
	I1124 03:11:10.079079  639611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:11:10.085901  639611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:11:10.092631  639611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:10.187290  639611 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:11:10.321446  639611 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:11:10.321516  639611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:11:10.325320  639611 start.go:564] Will wait 60s for crictl version
	I1124 03:11:10.325377  639611 ssh_runner.go:195] Run: which crictl
	I1124 03:11:10.328940  639611 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:11:10.355782  639611 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:11:10.355854  639611 ssh_runner.go:195] Run: crio --version
	I1124 03:11:10.386668  639611 ssh_runner.go:195] Run: crio --version
	I1124 03:11:10.419997  639611 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:11:10.421239  639611 cli_runner.go:164] Run: docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
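[editor's note] The --format string above is a Go text/template that the docker CLI evaluates against the network's inspect document. The same template machinery run locally against a stand-in struct (field names mirror docker's inspect JSON; values here are illustrative, not real inspect output):

package main

import (
	"fmt"
	"os"
	"text/template"
)

type ipamConfig struct{ Subnet, Gateway string }

type network struct {
	Name, Driver string
	IPAM         struct{ Config []ipamConfig }
}

func main() {
	// Trimmed version of the --format template from the log, evaluated with
	// Go's text/template just as the docker CLI does.
	tmpl := template.Must(template.New("net").Parse(
		`{"Name": "{{.Name}}","Driver": "{{.Driver}}",` +
			`"Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}",` +
			`"Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}"}` + "\n"))
	n := network{Name: "newest-cni-438041", Driver: "bridge"}
	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.94.0/24", Gateway: "192.168.94.1"}}
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}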
	I1124 03:11:10.440078  639611 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:11:10.443982  639611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:10.455537  639611 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 03:11:10.456654  639611 kubeadm.go:884] updating cluster {Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:11:10.456815  639611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:11:10.456863  639611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:10.490472  639611 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:10.490492  639611 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:11:10.490540  639611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:10.519699  639611 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:10.519720  639611 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:11:10.519729  639611 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:11:10.519828  639611 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-438041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
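[editor's note] The kubelet unit rendered above relies on a systemd drop-in convention: in the 10-kubeadm.conf drop-in, the first, empty ExecStart= clears the command inherited from the stock kubelet.service, and the second line installs the version-pinned binary with minikube's flags. A sketch of such a renderer (kubeletDropIn is hypothetical; the flag list is abbreviated):

package main

import (
	"fmt"
	"strings"
)

// kubeletDropIn renders a systemd drop-in like the one scp'd in the log.
// The empty ExecStart= is deliberate: it resets the inherited command list.
func kubeletDropIn(kubeletPath string, flags []string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=%s %s

[Install]
`, kubeletPath, strings.Join(flags, " "))
}

func main() {
	fmt.Print(kubeletDropIn("/var/lib/minikube/binaries/v1.34.1/kubelet", []string{
		"--hostname-override=newest-cni-438041",
		"--node-ip=192.168.94.2",
	}))
}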
	I1124 03:11:10.519912  639611 ssh_runner.go:195] Run: crio config
	I1124 03:11:10.565191  639611 cni.go:84] Creating CNI manager for ""
	I1124 03:11:10.565215  639611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:10.565239  639611 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 03:11:10.565270  639611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-438041 NodeName:newest-cni-438041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:10.565418  639611 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-438041"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
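[editor's note] The generated kubeadm.yaml above is one file holding four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by "---". A quick local sanity check that splits the documents and reports each kind before the file is handed to kubeadm init (a sketch; a YAML parser would be more robust than line matching):

package main

import (
	"fmt"
	"os"
	"strings"
)

// Split the multi-document kubeadm config and print each document's kind.
func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i+1, kind)
	}
}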
	
	I1124 03:11:10.565482  639611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:10.573438  639611 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:11:10.573499  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:10.581224  639611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:11:10.593276  639611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:10.607346  639611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1124 03:11:10.619134  639611 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:10.622475  639611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
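[editor's note] Both hosts injections (host.minikube.internal earlier, control-plane.minikube.internal here) use the same filter-then-append idiom, which makes the edit idempotent: drop any existing line for the name, append a fresh one, write through a temp file, then sudo cp it back. An in-memory Go equivalent (sketch; upsertHost is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any existing entry for name and appends "ip\tname",
// mirroring the { grep -v ...; echo ...; } shell idiom in the log.
func upsertHost(hosts, ip, name string) string {
	hosts = strings.TrimRight(hosts, "\n")
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(upsertHost(string(data), "192.168.94.2", "control-plane.minikube.internal"))
}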
	I1124 03:11:10.631680  639611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:10.724670  639611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:10.750283  639611 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041 for IP: 192.168.94.2
	I1124 03:11:10.750306  639611 certs.go:195] generating shared ca certs ...
	I1124 03:11:10.750339  639611 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:10.750511  639611 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:10.750555  639611 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:10.750565  639611 certs.go:257] generating profile certs ...
	I1124 03:11:10.750620  639611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key
	I1124 03:11:10.750633  639611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.crt with IP's: []
	I1124 03:11:10.920017  639611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.crt ...
	I1124 03:11:10.920047  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.crt: {Name:mkfd139af0a71cd4698b8ff5b3e638153eeb0dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:10.920228  639611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key ...
	I1124 03:11:10.920243  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key: {Name:mke75272685634ebc2912579601c6ca7cb4478b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:10.920357  639611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183
	I1124 03:11:10.920374  639611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 03:11:11.156793  639611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183 ...
	I1124 03:11:11.156820  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183: {Name:mke55e2e412acbf5b903a8d8b4a7d2880f9fbe7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:11.157004  639611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183 ...
	I1124 03:11:11.157022  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183: {Name:mkad44470d73de35f2d3ae6d5e6d61417cfe11c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:11.157103  639611 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt
	I1124 03:11:11.157202  639611 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key
	I1124 03:11:11.157264  639611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key
	I1124 03:11:11.157285  639611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt with IP's: []
	I1124 03:11:11.183331  639611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt ...
	I1124 03:11:11.183357  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt: {Name:mkaf061d70fce7922fd95db6d82ac8186d66239f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:11.183478  639611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key ...
	I1124 03:11:11.183490  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key: {Name:mk44940b01cb7f629207bffeb036b8a7e5d40814 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
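[editor's note] The client/apiserver/proxy-client certificates above come from the crypto.go routines in the log: generate a key, build a template with the wanted SANs, sign it with the profile's CA. A compressed stdlib sketch of that flow (throwaway ECDSA keys and a 24h validity here; minikube's real parameters differ, and error handling is elided for brevity):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// In-memory, throwaway CA standing in for minikubeCA.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf cert with IP SANs of the kind seen in the log.
	leafKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("192.168.94.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}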
	I1124 03:11:11.183656  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:11.183693  639611 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:11.183702  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:11.183724  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:11.183746  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:11.183768  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:11.183810  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:11.184490  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:11.202414  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:11.218915  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:11.235233  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:11.251127  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:11:11.267814  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:11:11.284563  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:11.300790  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:11:11.316788  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:11.334413  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:11.350424  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:11.366533  639611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:11.378365  639611 ssh_runner.go:195] Run: openssl version
	I1124 03:11:11.384126  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:11.391937  639611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:11.395429  639611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:11.395475  639611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:11.428268  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:11:11.435958  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:11.443551  639611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:11.446861  639611 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:11.446917  639611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:11.480561  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:11.488521  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:11.496317  639611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:11.499903  639611 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:11.500486  639611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:11.534970  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
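[editor's note] The b5213941.0 / 51391683.0 / 3ec20f2e.0 names above are OpenSSL subject-hash links: OpenSSL locates CA certificates in /etc/ssl/certs by files named <subject-hash>.0, which is why each copied PEM gets a symlink named after the output of `openssl x509 -hash -noout`. A local sketch of that step (the real commands run remotely with sudo):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash creates the <subject-hash>.0 symlink OpenSSL expects.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}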
	I1124 03:11:11.542760  639611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:11.546025  639611 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:11:11.546084  639611 kubeadm.go:401] StartCluster: {Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:11.546189  639611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:11.546235  639611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:11.573079  639611 cri.go:89] found id: ""
	I1124 03:11:11.573143  639611 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:11.580989  639611 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:11:11.588193  639611 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:11:11.588243  639611 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:11:11.595578  639611 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:11:11.595596  639611 kubeadm.go:158] found existing configuration files:
	
	I1124 03:11:11.595632  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:11:11.602806  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:11:11.602846  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:11:11.609710  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:11:11.617281  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:11:11.617327  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:11:11.624606  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:11:11.631999  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:11:11.632041  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:11:11.640350  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:11:11.648359  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:11:11.648402  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:11:11.656826  639611 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:11:11.705613  639611 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:11:11.705684  639611 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:11:11.726192  639611 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:11:11.726285  639611 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:11:11.726340  639611 kubeadm.go:319] OS: Linux
	I1124 03:11:11.726397  639611 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:11:11.726461  639611 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:11:11.726524  639611 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:11:11.726587  639611 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:11:11.726686  639611 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:11:11.726790  639611 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:11:11.726861  639611 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:11:11.726943  639611 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:11:11.786505  639611 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:11:11.786613  639611 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:11:11.786747  639611 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:11:11.794629  639611 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
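[editor's note] The init invocation above (kubeadm.go:401, started via ssh_runner) runs the version-pinned kubeadm with a long --ignore-preflight-errors list, since checks like Swap, Mem and SystemVerification cannot pass inside a docker-driver container. A sketch of composing that command with os/exec (paths and the ignored set copied from the log, abbreviated; kubeadmInit is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// kubeadmInit builds the init command: the binary comes from the
// version-pinned directory, PATH is extended for any child processes,
// and selected preflight checks are ignored.
func kubeadmInit(binDir, configPath string, ignored []string) *exec.Cmd {
	cmd := exec.Command(filepath.Join(binDir, "kubeadm"), "init",
		"--config", configPath,
		"--ignore-preflight-errors="+strings.Join(ignored, ","))
	cmd.Env = append(os.Environ(), "PATH="+binDir+":"+os.Getenv("PATH"))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd
}

func main() {
	cmd := kubeadmInit("/var/lib/minikube/binaries/v1.34.1",
		"/var/tmp/minikube/kubeadm.yaml",
		[]string{"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification"})
	fmt.Println("would run:", cmd.String())
}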
	W1124 03:11:08.757098  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	W1124 03:11:10.757264  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	I1124 03:11:11.798699  639611 out.go:252]   - Generating certificates and keys ...
	I1124 03:11:11.798797  639611 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:11:11.798912  639611 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:11:11.963263  639611 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:11:12.107595  639611 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:11:07.983375  631782 out.go:252]   - Generating certificates and keys ...
	I1124 03:11:07.983499  631782 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:11:07.983606  631782 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:11:09.010428  631782 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:11:09.257194  631782 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:11:09.494535  631782 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:11:09.716956  631782 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:11:09.775865  631782 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:11:09.776099  631782 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-603010] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:11:10.030969  631782 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:11:10.031162  631782 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-603010] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:11:10.290289  631782 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:11:10.445776  631782 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:11:10.719700  631782 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:11:10.719788  631782 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:11:10.954056  631782 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:11:11.224490  631782 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:11:11.470938  631782 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:11:11.927378  631782 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:11:12.303932  631782 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:11:12.304513  631782 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:11:12.307975  631782 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:11:12.309284  631782 out.go:252]   - Booting up control plane ...
	I1124 03:11:12.309381  631782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:11:12.309465  631782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:11:12.310009  631782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:11:12.339837  631782 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:11:12.340003  631782 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:11:12.347388  631782 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:11:12.347620  631782 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:11:12.347698  631782 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:11:12.466844  631782 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:11:12.466970  631782 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:11:12.233009  639611 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:11:12.451335  639611 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:11:12.593355  639611 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:11:12.593574  639611 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-438041] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:11:13.275810  639611 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:11:13.276017  639611 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-438041] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:11:14.145354  639611 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:11:14.614138  639611 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:11:14.941086  639611 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:11:14.941227  639611 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:11:15.058919  639611 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:11:15.267378  639611 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:11:15.939232  639611 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:11:16.257592  639611 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:11:16.635822  639611 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:11:16.636485  639611 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:11:16.640110  639611 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1124 03:11:13.256972  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	W1124 03:11:15.259252  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	I1124 03:11:12.968700  631782 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.726277ms
	I1124 03:11:12.972359  631782 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:11:12.972498  631782 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 03:11:12.972634  631782 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:11:12.972778  631782 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:11:15.168823  631782 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.194903045s
	I1124 03:11:15.395212  631782 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.422782586s
	I1124 03:11:16.974533  631782 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002117874s
	I1124 03:11:16.990327  631782 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:11:17.001157  631782 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:11:17.009558  631782 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:11:17.009832  631782 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-603010 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:11:17.017079  631782 kubeadm.go:319] [bootstrap-token] Using token: qixyjy.v1lkfw8d9c2mcnrf
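[editor's note] The control-plane-check phase logged for no-preload-603010 polls three endpoints (apiserver livez, controller-manager healthz, scheduler livez) until each answers 200, within a 4m budget. A sketch of such a polling loop (TLS verification is skipped here because the components serve certificates from the cluster CA; kubeadm's real check authenticates properly):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns 200 OK or timeout elapses.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	for _, u := range []string{
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	} {
		fmt.Println(u, "->", waitHealthy(u, 4*time.Minute))
	}
}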
	I1124 03:11:16.641561  639611 out.go:252]   - Booting up control plane ...
	I1124 03:11:16.641675  639611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:11:16.641789  639611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:11:16.642679  639611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:11:16.660968  639611 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:11:16.661101  639611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:11:16.668686  639611 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:11:16.669004  639611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:11:16.669064  639611 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:11:16.793748  639611 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:11:16.793925  639611 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:11:17.712301  636397 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:11:17.712380  636397 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:11:17.712515  636397 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:11:17.712609  636397 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:11:17.712667  636397 kubeadm.go:319] OS: Linux
	I1124 03:11:17.712717  636397 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:11:17.712772  636397 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:11:17.712846  636397 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:11:17.712998  636397 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:11:17.713081  636397 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:11:17.713158  636397 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:11:17.713228  636397 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:11:17.713298  636397 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:11:17.713410  636397 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:11:17.713559  636397 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:11:17.713706  636397 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:11:17.713767  636397 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:11:17.715195  636397 out.go:252]   - Generating certificates and keys ...
	I1124 03:11:17.715298  636397 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:11:17.715442  636397 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:11:17.715523  636397 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:11:17.715597  636397 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:11:17.715657  636397 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:11:17.715733  636397 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:11:17.715822  636397 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:11:17.716053  636397 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993813 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:11:17.716134  636397 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:11:17.716334  636397 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993813 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:11:17.716443  636397 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:11:17.716537  636397 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:11:17.716600  636397 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:11:17.716682  636397 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:11:17.716772  636397 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:11:17.716823  636397 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:11:17.716938  636397 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:11:17.717053  636397 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:11:17.717141  636397 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:11:17.717221  636397 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:11:17.717295  636397 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:11:17.718959  636397 out.go:252]   - Booting up control plane ...
	I1124 03:11:17.719049  636397 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:11:17.719135  636397 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:11:17.719219  636397 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:11:17.719341  636397 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:11:17.719462  636397 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:11:17.719560  636397 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:11:17.719632  636397 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:11:17.719681  636397 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:11:17.719830  636397 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:11:17.719976  636397 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:11:17.720049  636397 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501467711s
	I1124 03:11:17.720160  636397 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:11:17.720268  636397 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1124 03:11:17.720406  636397 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:11:17.720513  636397 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:11:17.720614  636397 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.599087563s
	I1124 03:11:17.720742  636397 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.501028525s
	I1124 03:11:17.720844  636397 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00179766s
	I1124 03:11:17.721018  636397 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:11:17.721192  636397 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:11:17.721298  636397 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:11:17.721558  636397 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-993813 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:11:17.721622  636397 kubeadm.go:319] [bootstrap-token] Using token: q5wdgj.p9bwnkl5amhf01kb
	I1124 03:11:17.722776  636397 out.go:252]   - Configuring RBAC rules ...
	I1124 03:11:17.722949  636397 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:11:17.723089  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:11:17.723273  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:11:17.723470  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:11:17.723636  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:11:17.723759  636397 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:11:17.723924  636397 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:11:17.723997  636397 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:11:17.724057  636397 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:11:17.724062  636397 kubeadm.go:319] 
	I1124 03:11:17.724140  636397 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:11:17.724145  636397 kubeadm.go:319] 
	I1124 03:11:17.724249  636397 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:11:17.724254  636397 kubeadm.go:319] 
	I1124 03:11:17.724288  636397 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:11:17.724365  636397 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:11:17.724429  636397 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:11:17.724434  636397 kubeadm.go:319] 
	I1124 03:11:17.724504  636397 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:11:17.724509  636397 kubeadm.go:319] 
	I1124 03:11:17.724570  636397 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:11:17.724576  636397 kubeadm.go:319] 
	I1124 03:11:17.724642  636397 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:11:17.724751  636397 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:11:17.724845  636397 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:11:17.724850  636397 kubeadm.go:319] 
	I1124 03:11:17.724962  636397 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:11:17.725053  636397 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:11:17.725058  636397 kubeadm.go:319] 
	I1124 03:11:17.725156  636397 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token q5wdgj.p9bwnkl5amhf01kb \
	I1124 03:11:17.725281  636397 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:11:17.725306  636397 kubeadm.go:319] 	--control-plane 
	I1124 03:11:17.725311  636397 kubeadm.go:319] 
	I1124 03:11:17.725412  636397 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:11:17.725417  636397 kubeadm.go:319] 
	I1124 03:11:17.725515  636397 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token q5wdgj.p9bwnkl5amhf01kb \
	I1124 03:11:17.725654  636397 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
	I1124 03:11:17.725664  636397 cni.go:84] Creating CNI manager for ""
	I1124 03:11:17.725672  636397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:17.727357  636397 out.go:179] * Configuring CNI (Container Networking Interface) ...
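The two kubeadm join commands printed above differ only in the --control-plane flag; the bootstrap token (q5wdgj.*) expires after 24 hours by default. A minimal sketch, assuming it is run on the existing control-plane node, of regenerating a fresh worker join command:

    kubeadm token create --print-join-command

The --discovery-token-ca-cert-hash value pins the cluster CA, so a joining node can authenticate the API server before trusting it.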
	I1124 03:11:17.018572  631782 out.go:252]   - Configuring RBAC rules ...
	I1124 03:11:17.018732  631782 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:11:17.021245  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:11:17.025919  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:11:17.028242  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:11:17.030590  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:11:17.032723  631782 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:11:17.380197  631782 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:11:17.802727  631782 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:11:18.381075  631782 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:11:18.382320  631782 kubeadm.go:319] 
	I1124 03:11:18.382408  631782 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:11:18.382416  631782 kubeadm.go:319] 
	I1124 03:11:18.382508  631782 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:11:18.382522  631782 kubeadm.go:319] 
	I1124 03:11:18.382554  631782 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:11:18.382630  631782 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:11:18.382704  631782 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:11:18.382712  631782 kubeadm.go:319] 
	I1124 03:11:18.382781  631782 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:11:18.382791  631782 kubeadm.go:319] 
	I1124 03:11:18.382850  631782 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:11:18.382859  631782 kubeadm.go:319] 
	I1124 03:11:18.382948  631782 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:11:18.383059  631782 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:11:18.383153  631782 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:11:18.383164  631782 kubeadm.go:319] 
	I1124 03:11:18.383265  631782 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:11:18.383360  631782 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:11:18.383370  631782 kubeadm.go:319] 
	I1124 03:11:18.383510  631782 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qixyjy.v1lkfw8d9c2mcnrf \
	I1124 03:11:18.383708  631782 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:11:18.383747  631782 kubeadm.go:319] 	--control-plane 
	I1124 03:11:18.383767  631782 kubeadm.go:319] 
	I1124 03:11:18.383880  631782 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:11:18.383909  631782 kubeadm.go:319] 
	I1124 03:11:18.384037  631782 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qixyjy.v1lkfw8d9c2mcnrf \
	I1124 03:11:18.384180  631782 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
	I1124 03:11:18.387182  631782 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:11:18.387348  631782 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:11:18.387386  631782 cni.go:84] Creating CNI manager for ""
	I1124 03:11:18.387399  631782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:18.389706  631782 out.go:179] * Configuring CNI (Container Networking Interface) ...
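Both [WARNING] lines above are non-fatal preflight findings: the SystemVerification check could not parse the kernel config because this GCP kernel ships no "configs" module, and the kubelet unit is deliberately left disabled, since minikube starts it explicitly (see the `sudo systemctl start kubelet` calls later in this log) rather than enabling it at boot. On a conventional host the second warning would be cleared with the command kubeadm itself suggests:

    sudo systemctl enable kubelet.service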
	I1124 03:11:17.729080  636397 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:11:17.735280  636397 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:11:17.735299  636397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:11:17.750224  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:11:17.964488  636397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:11:17.964571  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:17.964583  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993813 minikube.k8s.io/updated_at=2025_11_24T03_11_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=default-k8s-diff-port-993813 minikube.k8s.io/primary=true
	I1124 03:11:17.977541  636397 ops.go:34] apiserver oom_adj: -16
	I1124 03:11:18.089531  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:18.589931  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
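The oom_adj probe above confirms the kube-apiserver runs with a score of -16, making it one of the last processes the kernel OOM killer will pick. The same check can be reproduced by hand with the command taken from the log:

    cat /proc/$(pgrep kube-apiserver)/oom_adj    # prints -16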
	I1124 03:11:17.757544  623347 node_ready.go:49] node "old-k8s-version-579951" is "Ready"
	I1124 03:11:17.757568  623347 node_ready.go:38] duration metric: took 13.503706583s for node "old-k8s-version-579951" to be "Ready" ...
	I1124 03:11:17.757591  623347 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:17.757632  623347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:11:17.769351  623347 api_server.go:72] duration metric: took 13.944624755s to wait for apiserver process to appear ...
	I1124 03:11:17.769381  623347 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:17.769404  623347 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 03:11:17.773486  623347 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 03:11:17.774606  623347 api_server.go:141] control plane version: v1.28.0
	I1124 03:11:17.774639  623347 api_server.go:131] duration metric: took 5.249615ms to wait for apiserver health ...
	I1124 03:11:17.774650  623347 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:17.778732  623347 system_pods.go:59] 8 kube-system pods found
	I1124 03:11:17.778769  623347 system_pods.go:61] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:17.778779  623347 system_pods.go:61] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:17.778787  623347 system_pods.go:61] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:17.778792  623347 system_pods.go:61] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:17.778797  623347 system_pods.go:61] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:17.778806  623347 system_pods.go:61] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:17.778810  623347 system_pods.go:61] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:17.778817  623347 system_pods.go:61] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:17.778824  623347 system_pods.go:74] duration metric: took 4.167214ms to wait for pod list to return data ...
	I1124 03:11:17.778835  623347 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:17.781411  623347 default_sa.go:45] found service account: "default"
	I1124 03:11:17.781435  623347 default_sa.go:55] duration metric: took 2.594162ms for default service account to be created ...
	I1124 03:11:17.781446  623347 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:11:17.784981  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:17.785018  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:17.785031  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:17.785044  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:17.785050  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:17.785061  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:17.785066  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:17.785076  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:17.785090  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:17.785127  623347 retry.go:31] will retry after 271.484184ms: missing components: kube-dns
	I1124 03:11:18.065194  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:18.065237  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:18.065248  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:18.065257  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:18.065263  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:18.065269  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:18.065274  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:18.065279  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:18.065287  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:18.065306  623347 retry.go:31] will retry after 388.018904ms: missing components: kube-dns
	I1124 03:11:18.457864  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:18.457936  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:18.457946  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:18.457961  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:18.457972  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:18.457978  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:18.457984  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:18.457991  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:18.457999  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:18.458022  623347 retry.go:31] will retry after 449.601826ms: missing components: kube-dns
	I1124 03:11:18.911831  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:18.911859  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Running
	I1124 03:11:18.911865  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:18.911869  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:18.911873  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:18.911877  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:18.911880  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:18.911916  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:18.911921  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Running
	I1124 03:11:18.911931  623347 system_pods.go:126] duration metric: took 1.130477915s to wait for k8s-apps to be running ...
	I1124 03:11:18.911944  623347 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:11:18.911996  623347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:18.925774  623347 system_svc.go:56] duration metric: took 13.819357ms WaitForService to wait for kubelet
	I1124 03:11:18.925804  623347 kubeadm.go:587] duration metric: took 15.101081639s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:11:18.925827  623347 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:18.928599  623347 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:18.928633  623347 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:18.928652  623347 node_conditions.go:105] duration metric: took 2.818338ms to run NodePressure ...
	I1124 03:11:18.928667  623347 start.go:242] waiting for startup goroutines ...
	I1124 03:11:18.928681  623347 start.go:247] waiting for cluster config update ...
	I1124 03:11:18.928701  623347 start.go:256] writing updated cluster config ...
	I1124 03:11:18.929049  623347 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:18.933285  623347 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:18.937686  623347 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.946299  623347 pod_ready.go:94] pod "coredns-5dd5756b68-5nwx9" is "Ready"
	I1124 03:11:18.946320  623347 pod_ready.go:86] duration metric: took 8.611977ms for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.950801  623347 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.960988  623347 pod_ready.go:94] pod "etcd-old-k8s-version-579951" is "Ready"
	I1124 03:11:18.961015  623347 pod_ready.go:86] duration metric: took 10.19455ms for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.965881  623347 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.974882  623347 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-579951" is "Ready"
	I1124 03:11:18.974933  623347 pod_ready.go:86] duration metric: took 9.016779ms for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.977770  623347 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:19.341020  623347 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-579951" is "Ready"
	I1124 03:11:19.341052  623347 pod_ready.go:86] duration metric: took 363.250058ms for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:19.538869  623347 pod_ready.go:83] waiting for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:19.937877  623347 pod_ready.go:94] pod "kube-proxy-r82jh" is "Ready"
	I1124 03:11:19.937925  623347 pod_ready.go:86] duration metric: took 399.001292ms for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:20.140275  623347 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:20.537761  623347 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-579951" is "Ready"
	I1124 03:11:20.537795  623347 pod_ready.go:86] duration metric: took 397.491187ms for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:20.537812  623347 pod_ready.go:40] duration metric: took 1.604492738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:20.582109  623347 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 03:11:20.583699  623347 out.go:203] 
	W1124 03:11:20.584752  623347 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:11:20.585796  623347 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:11:20.587217  623347 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-579951" cluster and "default" namespace by default
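A minor skew of 6 is well outside kubectl's supported window of one minor version either side of the API server, hence the warning. The suggested wrapper fetches and runs a matching client; for example (profile name taken from this run):

    minikube -p old-k8s-version-579951 kubectl -- get pods -A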
	I1124 03:11:17.795245  639611 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001564938s
	I1124 03:11:17.799260  639611 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:11:17.799423  639611 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 03:11:17.799562  639611 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:11:17.799651  639611 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:11:20.070827  639611 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.271449475s
	I1124 03:11:20.290602  639611 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.491348646s
	I1124 03:11:21.801475  639611 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002149825s
	I1124 03:11:21.812595  639611 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:11:21.822553  639611 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:11:21.831169  639611 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:11:21.831446  639611 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-438041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:11:21.841628  639611 kubeadm.go:319] [bootstrap-token] Using token: yx8fea.c13myzzt6w383nef
	I1124 03:11:21.842995  639611 out.go:252]   - Configuring RBAC rules ...
	I1124 03:11:21.843145  639611 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:11:21.846076  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:11:21.851007  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:11:21.853367  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:11:21.856222  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:11:21.859271  639611 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:11:19.090574  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:19.589602  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.090576  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.590533  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.089866  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.589593  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.089582  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.590222  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.673854  636397 kubeadm.go:1114] duration metric: took 4.709348594s to wait for elevateKubeSystemPrivileges
	I1124 03:11:22.673908  636397 kubeadm.go:403] duration metric: took 16.63377865s to StartCluster
	I1124 03:11:22.673934  636397 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:22.674008  636397 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:22.675076  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:22.675302  636397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:22.675326  636397 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:22.675390  636397 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993813"
	I1124 03:11:22.675304  636397 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:22.675418  636397 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993813"
	I1124 03:11:22.675431  636397 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993813"
	I1124 03:11:22.675411  636397 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993813"
	I1124 03:11:22.675530  636397 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:11:22.675536  636397 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:22.675814  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:11:22.676034  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:11:22.676852  636397 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:22.678754  636397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:22.703150  636397 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993813"
	I1124 03:11:22.703198  636397 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:11:22.703676  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:11:22.704736  636397 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
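The repeated `kubectl get sa default` calls above are a readiness poll: the minikube-rbac clusterrolebinding only becomes effective once the controller-manager has provisioned the "default" service account, so the get is retried roughly every 500ms until it succeeds (about 4.7s in this run, per the elevateKubeSystemPrivileges metric). A standalone equivalent, assuming the same in-node paths shown in the log:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig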
	I1124 03:11:18.390820  631782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:11:18.395615  631782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:11:18.395633  631782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:11:18.409234  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:11:18.710608  631782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:11:18.710754  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:18.710853  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-603010 minikube.k8s.io/updated_at=2025_11_24T03_11_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=no-preload-603010 minikube.k8s.io/primary=true
	I1124 03:11:18.818373  631782 ops.go:34] apiserver oom_adj: -16
	I1124 03:11:18.818465  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:19.318531  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:19.819135  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.319402  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.819441  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.319189  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.818604  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.319077  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.706096  636397 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:22.706117  636397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:22.706176  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:22.737283  636397 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:22.737304  636397 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:22.737370  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:22.740863  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:22.761473  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:22.778645  636397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:22.830555  636397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:22.862561  636397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:22.876089  636397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:22.963053  636397 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:22.964307  636397 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:11:23.185636  636397 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
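With both addons enabled, the storage-provisioner pod and minikube's default "standard" StorageClass should appear shortly afterwards. A quick hedged check from the host:

    minikube -p default-k8s-diff-port-993813 addons list
    kubectl get storageclass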
	I1124 03:11:22.209953  639611 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:11:22.623609  639611 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:11:23.207075  639611 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:11:23.208086  639611 kubeadm.go:319] 
	I1124 03:11:23.208184  639611 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:11:23.208202  639611 kubeadm.go:319] 
	I1124 03:11:23.208296  639611 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:11:23.208304  639611 kubeadm.go:319] 
	I1124 03:11:23.208344  639611 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:11:23.208443  639611 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:11:23.208509  639611 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:11:23.208519  639611 kubeadm.go:319] 
	I1124 03:11:23.208591  639611 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:11:23.208601  639611 kubeadm.go:319] 
	I1124 03:11:23.208661  639611 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:11:23.208671  639611 kubeadm.go:319] 
	I1124 03:11:23.208771  639611 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:11:23.208934  639611 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:11:23.209014  639611 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:11:23.209021  639611 kubeadm.go:319] 
	I1124 03:11:23.209090  639611 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:11:23.209153  639611 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:11:23.209159  639611 kubeadm.go:319] 
	I1124 03:11:23.209225  639611 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yx8fea.c13myzzt6w383nef \
	I1124 03:11:23.209329  639611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:11:23.209368  639611 kubeadm.go:319] 	--control-plane 
	I1124 03:11:23.209382  639611 kubeadm.go:319] 
	I1124 03:11:23.209513  639611 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:11:23.209523  639611 kubeadm.go:319] 
	I1124 03:11:23.209667  639611 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yx8fea.c13myzzt6w383nef \
	I1124 03:11:23.209795  639611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
	I1124 03:11:23.212372  639611 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:11:23.212472  639611 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:11:23.212489  639611 cni.go:84] Creating CNI manager for ""
	I1124 03:11:23.212498  639611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:23.213669  639611 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:11:22.819290  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.318726  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.413238  631782 kubeadm.go:1114] duration metric: took 4.702498844s to wait for elevateKubeSystemPrivileges
	I1124 03:11:23.413274  631782 kubeadm.go:403] duration metric: took 15.686211393s to StartCluster
	I1124 03:11:23.413298  631782 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:23.413374  631782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:23.415097  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:23.415455  631782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:23.415991  631782 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:23.416200  631782 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:23.416393  631782 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:23.416478  631782 addons.go:70] Setting storage-provisioner=true in profile "no-preload-603010"
	I1124 03:11:23.416515  631782 addons.go:239] Setting addon storage-provisioner=true in "no-preload-603010"
	I1124 03:11:23.416545  631782 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:11:23.416771  631782 addons.go:70] Setting default-storageclass=true in profile "no-preload-603010"
	I1124 03:11:23.416794  631782 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603010"
	I1124 03:11:23.417522  631782 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:11:23.418922  631782 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:11:23.420690  631782 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:23.422440  631782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:23.453170  631782 addons.go:239] Setting addon default-storageclass=true in "no-preload-603010"
	I1124 03:11:23.453315  631782 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:11:23.454249  631782 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:11:23.456721  631782 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:23.187200  636397 addons.go:530] duration metric: took 511.871879ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:23.468811  636397 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993813" context rescaled to 1 replicas
	I1124 03:11:23.457832  631782 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:23.457852  631782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:23.457945  631782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:11:23.485040  631782 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:23.485073  631782 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:23.485135  631782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:11:23.488649  631782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:11:23.522776  631782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:11:23.578154  631782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:23.637057  631782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:23.642323  631782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:23.675165  631782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:23.795763  631782 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:23.982706  631782 node_ready.go:35] waiting up to 6m0s for node "no-preload-603010" to be "Ready" ...
	I1124 03:11:23.988365  631782 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:23.214606  639611 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:11:23.218969  639611 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:11:23.219002  639611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:11:23.233030  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:11:23.530587  639611 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:11:23.530753  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.530907  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-438041 minikube.k8s.io/updated_at=2025_11_24T03_11_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=newest-cni-438041 minikube.k8s.io/primary=true
	I1124 03:11:23.553306  639611 ops.go:34] apiserver oom_adj: -16
	I1124 03:11:23.638819  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:24.139560  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:24.639641  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:25.139273  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:25.638941  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:26.139461  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:26.638988  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.989407  631782 addons.go:530] duration metric: took 573.023057ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:24.300916  631782 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-603010" context rescaled to 1 replicas
	W1124 03:11:25.985432  631782 node_ready.go:57] node "no-preload-603010" has "Ready":"False" status (will retry)
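Retries like the node_ready one above poll the Node object until the kubelet reports Ready=True, which normally happens only after the CNI (kindnet) pods come up. The same wait expressed directly with kubectl, using the 6m budget from the log:

    kubectl wait --for=condition=Ready node/no-preload-603010 --timeout=6m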
	I1124 03:11:27.139734  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:27.639015  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:28.139551  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:28.207738  639611 kubeadm.go:1114] duration metric: took 4.677029552s to wait for elevateKubeSystemPrivileges
	I1124 03:11:28.207780  639611 kubeadm.go:403] duration metric: took 16.661698302s to StartCluster
	I1124 03:11:28.207804  639611 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:28.207878  639611 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:28.209479  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:28.209719  639611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:28.209737  639611 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:28.209814  639611 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:28.209929  639611 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-438041"
	I1124 03:11:28.209946  639611 addons.go:70] Setting default-storageclass=true in profile "newest-cni-438041"
	I1124 03:11:28.209971  639611 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-438041"
	I1124 03:11:28.209980  639611 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-438041"
	I1124 03:11:28.210010  639611 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:28.210056  639611 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:28.210387  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:28.210537  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:28.211106  639611 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:28.212323  639611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:28.233230  639611 addons.go:239] Setting addon default-storageclass=true in "newest-cni-438041"
	I1124 03:11:28.233278  639611 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:28.233850  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:28.234771  639611 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:28.235819  639611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:28.235861  639611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:28.235962  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:28.261133  639611 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:28.261156  639611 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:28.261334  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:28.267999  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:28.289398  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:28.299784  639611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:28.359817  639611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:28.384919  639611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:28.404504  639611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
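	[editor's note] The two Run lines above show how addons land on the node: minikube renders each manifest, copies it to /etc/kubernetes/addons/, and applies it with the bundled kubectl, passing KUBECONFIG as a sudo-level environment assignment. A minimal local sketch of that apply step, with paths taken from the log (minikube actually executes this inside the node over SSH via its ssh_runner):

-- example --
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddon mirrors the logged command:
//   sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
//     /var/lib/minikube/binaries/v1.34.1/kubectl apply -f <manifest>
// sudo accepts the VAR=value argument as an environment assignment.
func applyAddon(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command("sudo", "KUBECONFIG="+kubeconfig, kubectl, "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("apply %s: %v\n%s", manifest, err, out)
	}
	return nil
}

func main() {
	err := applyAddon(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /example --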
	I1124 03:11:28.491961  639611 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
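	[editor's note] The long sed pipeline at 03:11:28.299 splices a hosts stanza into the CoreDNS Corefile just before its forward plugin, so host.minikube.internal resolves to the host-side gateway (192.168.94.1 here). Reconstructed from the sed expressions, the inserted stanza is:

-- example --
        hosts {
           192.168.94.1 host.minikube.internal
           fallthrough
        }
-- /example --

	The fallthrough directive lets queries that don't match the hosts table continue to the forward plugin as before.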
	I1124 03:11:28.493110  639611 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:28.493157  639611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1124 03:11:28.510848  639611 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "newest-cni-438041" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1124 03:11:28.510875  639611 start.go:161] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
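	[editor's note] The rescale failure above is the stock optimistic-concurrency error: the coredns Deployment's resourceVersion changed between minikube's read and its write. minikube logs it as non-retryable at this point in startup; the conventional client-go pattern is to re-read and retry on conflict. A sketch (not minikube's code; the kubeconfig path is the in-node one from the log):

-- example --
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

// scaleCoreDNS retries the read-modify-write on resourceVersion
// conflicts, which is exactly the failure mode logged above.
func scaleCoreDNS(cs kubernetes.Interface, replicas int32) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		dep, err := cs.AppsV1().Deployments("kube-system").
			Get(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		dep.Spec.Replicas = &replicas
		_, err = cs.AppsV1().Deployments("kube-system").
			Update(context.TODO(), dep, metav1.UpdateOptions{})
		return err // a Conflict error triggers another attempt
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	fmt.Println(scaleCoreDNS(kubernetes.NewForConfigOrDie(cfg), 1))
}
-- /example --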
	I1124 03:11:28.701114  639611 api_server.go:72] duration metric: took 491.340672ms to wait for apiserver process to appear ...
	I1124 03:11:28.701143  639611 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:28.701166  639611 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:28.705994  639611 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:11:28.706754  639611 api_server.go:141] control plane version: v1.34.1
	I1124 03:11:28.706781  639611 api_server.go:131] duration metric: took 5.630796ms to wait for apiserver health ...
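	[editor's note] The healthz wait above is a plain HTTPS GET that succeeds on a 200 whose body is "ok", as echoed in the log. A stripped-down sketch (TLS verification disabled for brevity; minikube authenticates with the cluster's CA and client certificates):

-- example --
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok" or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.94.2:8443/healthz", time.Minute))
}
-- /example --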
	I1124 03:11:28.706793  639611 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:28.709054  639611 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:28.709369  639611 system_pods.go:59] 9 kube-system pods found
	I1124 03:11:28.709395  639611 system_pods.go:61] "coredns-66bc5c9577-b5rlp" [ec3ad010-7694-4640-9638-fe6f5c97f56a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:28.709402  639611 system_pods.go:61] "coredns-66bc5c9577-mwvq8" [c8831e7f-34c0-40c7-a728-7f7882ed604a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:28.709411  639611 system_pods.go:61] "etcd-newest-cni-438041" [7acbb753-dfd2-4438-b370-a7e38c4fbc5d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:11:28.709418  639611 system_pods.go:61] "kindnet-xp46p" [19fa7668-24bd-454c-a5df-37534a06d3a5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:11:28.709423  639611 system_pods.go:61] "kube-apiserver-newest-cni-438041" [c7d90375-f6c0-4a1f-8b80-81574119b191] Running
	I1124 03:11:28.709432  639611 system_pods.go:61] "kube-controller-manager-newest-cni-438041" [54b144f6-6f26-4e9b-818b-cbb2d7b4c0a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:11:28.709437  639611 system_pods.go:61] "kube-proxy-n85pg" [86f875e2-7efc-4b60-b031-a1de71ea7502] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:11:28.709447  639611 system_pods.go:61] "kube-scheduler-newest-cni-438041" [75e99a3a-d4a9-4428-a52a-ef5ac4edc76c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:11:28.709457  639611 system_pods.go:61] "storage-provisioner" [9a94c2f7-e288-4528-b22c-f413d79bdf46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:28.709467  639611 system_pods.go:74] duration metric: took 2.667768ms to wait for pod list to return data ...
	I1124 03:11:28.709481  639611 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:28.710153  639611 addons.go:530] duration metric: took 500.34824ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:28.711298  639611 default_sa.go:45] found service account: "default"
	I1124 03:11:28.711317  639611 default_sa.go:55] duration metric: took 1.826862ms for default service account to be created ...
	I1124 03:11:28.711328  639611 kubeadm.go:587] duration metric: took 501.561139ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:11:28.711341  639611 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:28.713171  639611 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:28.713192  639611 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:28.713206  639611 node_conditions.go:105] duration metric: took 1.86027ms to run NodePressure ...
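	[editor's note] The NodePressure step reads the node's capacity (logged just above) and, per the condition name, concerns the kubelet's pressure signals: a node is in good shape when MemoryPressure, DiskPressure, and PIDPressure are all False. A sketch over the core/v1 types (hypothetical helper, not minikube's node_conditions.go):

-- example --
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// underPressure reports whether any pressure condition on the node is
// True; the check passes only when all three are False.
func underPressure(node *corev1.Node) bool {
	pressure := map[corev1.NodeConditionType]bool{
		corev1.NodeMemoryPressure: true,
		corev1.NodeDiskPressure:   true,
		corev1.NodePIDPressure:    true,
	}
	for _, c := range node.Status.Conditions {
		if pressure[c.Type] && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	node := &corev1.Node{} // in practice: clientset.CoreV1().Nodes().Get(...)
	fmt.Println("under pressure:", underPressure(node))
}
-- /example --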
	I1124 03:11:28.713217  639611 start.go:242] waiting for startup goroutines ...
	I1124 03:11:28.713224  639611 start.go:247] waiting for cluster config update ...
	I1124 03:11:28.713233  639611 start.go:256] writing updated cluster config ...
	I1124 03:11:28.713443  639611 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:28.759550  639611 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:11:28.760722  639611 out.go:179] * Done! kubectl is now configured to use "newest-cni-438041" cluster and "default" namespace by default
	W1124 03:11:24.968153  636397 node_ready.go:57] node "default-k8s-diff-port-993813" has "Ready":"False" status (will retry)
	W1124 03:11:27.467212  636397 node_ready.go:57] node "default-k8s-diff-port-993813" has "Ready":"False" status (will retry)
	W1124 03:11:27.985481  631782 node_ready.go:57] node "no-preload-603010" has "Ready":"False" status (will retry)
	W1124 03:11:29.986348  631782 node_ready.go:57] node "no-preload-603010" has "Ready":"False" status (will retry)
	W1124 03:11:32.485831  631782 node_ready.go:57] node "no-preload-603010" has "Ready":"False" status (will retry)
	W1124 03:11:29.468262  636397 node_ready.go:57] node "default-k8s-diff-port-993813" has "Ready":"False" status (will retry)
	W1124 03:11:31.967715  636397 node_ready.go:57] node "default-k8s-diff-port-993813" has "Ready":"False" status (will retry)
	I1124 03:11:34.466994  636397 node_ready.go:49] node "default-k8s-diff-port-993813" is "Ready"
	I1124 03:11:34.467022  636397 node_ready.go:38] duration metric: took 11.502672577s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:11:34.467035  636397 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:34.467110  636397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:11:34.478740  636397 api_server.go:72] duration metric: took 11.803297035s to wait for apiserver process to appear ...
	I1124 03:11:34.478765  636397 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:34.478786  636397 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:11:34.483466  636397 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1124 03:11:34.484430  636397 api_server.go:141] control plane version: v1.34.1
	I1124 03:11:34.484461  636397 api_server.go:131] duration metric: took 5.687933ms to wait for apiserver health ...
	I1124 03:11:34.484472  636397 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:34.487442  636397 system_pods.go:59] 8 kube-system pods found
	I1124 03:11:34.487468  636397 system_pods.go:61] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:34.487474  636397 system_pods.go:61] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running
	I1124 03:11:34.487480  636397 system_pods.go:61] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running
	I1124 03:11:34.487484  636397 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running
	I1124 03:11:34.487488  636397 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running
	I1124 03:11:34.487492  636397 system_pods.go:61] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running
	I1124 03:11:34.487495  636397 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running
	I1124 03:11:34.487504  636397 system_pods.go:61] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:34.487510  636397 system_pods.go:74] duration metric: took 3.032367ms to wait for pod list to return data ...
	I1124 03:11:34.487519  636397 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:34.489541  636397 default_sa.go:45] found service account: "default"
	I1124 03:11:34.489559  636397 default_sa.go:55] duration metric: took 2.034688ms for default service account to be created ...
	I1124 03:11:34.489572  636397 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:11:34.492558  636397 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:34.492600  636397 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:34.492617  636397 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running
	I1124 03:11:34.492626  636397 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running
	I1124 03:11:34.492632  636397 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running
	I1124 03:11:34.492642  636397 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running
	I1124 03:11:34.492652  636397 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running
	I1124 03:11:34.492658  636397 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running
	I1124 03:11:34.492665  636397 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:34.492694  636397 retry.go:31] will retry after 200.05639ms: missing components: kube-dns
	I1124 03:11:34.696030  636397 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:34.696063  636397 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:34.696069  636397 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running
	I1124 03:11:34.696077  636397 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running
	I1124 03:11:34.696080  636397 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running
	I1124 03:11:34.696083  636397 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running
	I1124 03:11:34.696087  636397 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running
	I1124 03:11:34.696090  636397 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running
	I1124 03:11:34.696095  636397 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:34.696110  636397 retry.go:31] will retry after 280.398371ms: missing components: kube-dns
	I1124 03:11:34.980332  636397 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:34.980374  636397 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:34.980383  636397 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running
	I1124 03:11:34.980392  636397 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running
	I1124 03:11:34.980397  636397 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running
	I1124 03:11:34.980403  636397 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running
	I1124 03:11:34.980408  636397 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running
	I1124 03:11:34.980414  636397 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running
	I1124 03:11:34.980422  636397 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:34.980448  636397 retry.go:31] will retry after 395.954624ms: missing components: kube-dns
	I1124 03:11:35.380496  636397 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:35.380531  636397 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running
	I1124 03:11:35.380539  636397 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running
	I1124 03:11:35.380546  636397 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running
	I1124 03:11:35.380552  636397 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running
	I1124 03:11:35.380558  636397 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running
	I1124 03:11:35.380562  636397 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running
	I1124 03:11:35.380567  636397 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running
	I1124 03:11:35.380572  636397 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running
	I1124 03:11:35.380583  636397 system_pods.go:126] duration metric: took 891.004931ms to wait for k8s-apps to be running ...
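	[editor's note] The three "will retry after" lines above (200ms, 280ms, 396ms) show the k8s-apps wait in action: each pass lists the kube-system pods, reports any expected component still Pending as missing, and sleeps a growing, jittered interval before trying again. A condensed sketch of that loop (pod listing stubbed out; the growth factor and jitter are illustrative, not minikube's exact backoff):

-- example --
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForComponents re-checks until no expected component is missing,
// sleeping a growing, jittered interval between attempts, like the
// "will retry after ..." lines in the log.
func waitForComponents(missing func() []string, timeout time.Duration) error {
	backoff := 200 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		m := missing()
		if len(m) == 0 {
			return nil
		}
		d := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: missing components: %v\n", d, m)
		time.Sleep(d)
		backoff = backoff * 3 / 2 // grow ~1.5x per attempt
	}
	return fmt.Errorf("timed out; still missing: %v", missing())
}

func main() {
	attempts := 0
	err := waitForComponents(func() []string {
		attempts++
		if attempts < 4 {
			return []string{"kube-dns"}
		}
		return nil
	}, time.Minute)
	fmt.Println(err)
}
-- /example --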
	I1124 03:11:35.380597  636397 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:11:35.380649  636397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:35.393441  636397 system_svc.go:56] duration metric: took 12.837679ms WaitForService to wait for kubelet
	I1124 03:11:35.393466  636397 kubeadm.go:587] duration metric: took 12.71802981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:11:35.393481  636397 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:35.395811  636397 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:35.395841  636397 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:35.395858  636397 node_conditions.go:105] duration metric: took 2.372506ms to run NodePressure ...
	I1124 03:11:35.395872  636397 start.go:242] waiting for startup goroutines ...
	I1124 03:11:35.395895  636397 start.go:247] waiting for cluster config update ...
	I1124 03:11:35.395910  636397 start.go:256] writing updated cluster config ...
	I1124 03:11:35.396213  636397 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:35.399585  636397 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:35.402632  636397 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.406208  636397 pod_ready.go:94] pod "coredns-66bc5c9577-w62hm" is "Ready"
	I1124 03:11:35.406238  636397 pod_ready.go:86] duration metric: took 3.573858ms for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.407851  636397 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.411135  636397 pod_ready.go:94] pod "etcd-default-k8s-diff-port-993813" is "Ready"
	I1124 03:11:35.411153  636397 pod_ready.go:86] duration metric: took 3.282775ms for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.412766  636397 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.416462  636397 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-993813" is "Ready"
	I1124 03:11:35.416483  636397 pod_ready.go:86] duration metric: took 3.700448ms for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.418174  636397 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.803236  636397 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-993813" is "Ready"
	I1124 03:11:35.803266  636397 pod_ready.go:86] duration metric: took 385.06776ms for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.003469  636397 pod_ready.go:83] waiting for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.402595  636397 pod_ready.go:94] pod "kube-proxy-xgjzs" is "Ready"
	I1124 03:11:36.402619  636397 pod_ready.go:86] duration metric: took 399.12563ms for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.604639  636397 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:37.003065  636397 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-993813" is "Ready"
	I1124 03:11:37.003089  636397 pod_ready.go:86] duration metric: took 398.428767ms for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:37.003101  636397 pod_ready.go:40] duration metric: took 1.603482207s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
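	[editor's note] The "extra waiting" block above iterates a fixed set of control-plane label selectors and blocks until each matching pod reports the Ready condition (or is gone). The core check is a scan of the pod's status conditions; a sketch using client-go, with the kubeconfig path and one selector taken from the log:

-- example --
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady mirrors the pod_ready check: a pod counts as "Ready"
// when its PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for i := range pods.Items {
		fmt.Printf("%s ready=%v\n", pods.Items[i].Name, isPodReady(&pods.Items[i]))
	}
}
-- /example --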
	I1124 03:11:37.046307  636397 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:11:37.047979  636397 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-993813" cluster and "default" namespace by default
	W1124 03:11:34.486500  631782 node_ready.go:57] node "no-preload-603010" has "Ready":"False" status (will retry)
	I1124 03:11:36.485217  631782 node_ready.go:49] node "no-preload-603010" is "Ready"
	I1124 03:11:36.485247  631782 node_ready.go:38] duration metric: took 12.502511597s for node "no-preload-603010" to be "Ready" ...
	I1124 03:11:36.485264  631782 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:36.485315  631782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:11:36.496780  631782 api_server.go:72] duration metric: took 13.080544347s to wait for apiserver process to appear ...
	I1124 03:11:36.496802  631782 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:36.496819  631782 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:11:36.500722  631782 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:11:36.501639  631782 api_server.go:141] control plane version: v1.34.1
	I1124 03:11:36.501668  631782 api_server.go:131] duration metric: took 4.859943ms to wait for apiserver health ...
	I1124 03:11:36.501676  631782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:36.504265  631782 system_pods.go:59] 8 kube-system pods found
	I1124 03:11:36.504312  631782 system_pods.go:61] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:36.504327  631782 system_pods.go:61] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running
	I1124 03:11:36.504340  631782 system_pods.go:61] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running
	I1124 03:11:36.504349  631782 system_pods.go:61] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running
	I1124 03:11:36.504357  631782 system_pods.go:61] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running
	I1124 03:11:36.504365  631782 system_pods.go:61] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running
	I1124 03:11:36.504371  631782 system_pods.go:61] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running
	I1124 03:11:36.504383  631782 system_pods.go:61] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:36.504394  631782 system_pods.go:74] duration metric: took 2.710904ms to wait for pod list to return data ...
	I1124 03:11:36.504406  631782 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:36.506380  631782 default_sa.go:45] found service account: "default"
	I1124 03:11:36.506397  631782 default_sa.go:55] duration metric: took 1.983667ms for default service account to be created ...
	I1124 03:11:36.506407  631782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:11:36.508530  631782 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:36.508552  631782 system_pods.go:89] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:36.508557  631782 system_pods.go:89] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running
	I1124 03:11:36.508563  631782 system_pods.go:89] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running
	I1124 03:11:36.508567  631782 system_pods.go:89] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running
	I1124 03:11:36.508570  631782 system_pods.go:89] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running
	I1124 03:11:36.508574  631782 system_pods.go:89] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running
	I1124 03:11:36.508577  631782 system_pods.go:89] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running
	I1124 03:11:36.508583  631782 system_pods.go:89] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:36.508608  631782 retry.go:31] will retry after 237.617737ms: missing components: kube-dns
	I1124 03:11:36.749857  631782 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:36.749896  631782 system_pods.go:89] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running
	I1124 03:11:36.749902  631782 system_pods.go:89] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running
	I1124 03:11:36.749906  631782 system_pods.go:89] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running
	I1124 03:11:36.749909  631782 system_pods.go:89] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running
	I1124 03:11:36.749913  631782 system_pods.go:89] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running
	I1124 03:11:36.749916  631782 system_pods.go:89] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running
	I1124 03:11:36.749919  631782 system_pods.go:89] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running
	I1124 03:11:36.749922  631782 system_pods.go:89] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running
	I1124 03:11:36.749930  631782 system_pods.go:126] duration metric: took 243.517252ms to wait for k8s-apps to be running ...
	I1124 03:11:36.749940  631782 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:11:36.749989  631782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:36.763009  631782 system_svc.go:56] duration metric: took 13.057269ms WaitForService to wait for kubelet
	I1124 03:11:36.763038  631782 kubeadm.go:587] duration metric: took 13.346804489s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:11:36.763061  631782 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:36.765293  631782 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:36.765318  631782 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:36.765332  631782 node_conditions.go:105] duration metric: took 2.266082ms to run NodePressure ...
	I1124 03:11:36.765346  631782 start.go:242] waiting for startup goroutines ...
	I1124 03:11:36.765353  631782 start.go:247] waiting for cluster config update ...
	I1124 03:11:36.765363  631782 start.go:256] writing updated cluster config ...
	I1124 03:11:36.765588  631782 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:36.769133  631782 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:36.772057  631782 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.775398  631782 pod_ready.go:94] pod "coredns-66bc5c9577-9n5xf" is "Ready"
	I1124 03:11:36.775416  631782 pod_ready.go:86] duration metric: took 3.34099ms for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.777223  631782 pod_ready.go:83] waiting for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.780532  631782 pod_ready.go:94] pod "etcd-no-preload-603010" is "Ready"
	I1124 03:11:36.780549  631782 pod_ready.go:86] duration metric: took 3.305626ms for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.782225  631782 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.785532  631782 pod_ready.go:94] pod "kube-apiserver-no-preload-603010" is "Ready"
	I1124 03:11:36.785548  631782 pod_ready.go:86] duration metric: took 3.304015ms for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.787228  631782 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:37.173629  631782 pod_ready.go:94] pod "kube-controller-manager-no-preload-603010" is "Ready"
	I1124 03:11:37.173655  631782 pod_ready.go:86] duration metric: took 386.410612ms for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:37.374053  631782 pod_ready.go:83] waiting for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:37.772793  631782 pod_ready.go:94] pod "kube-proxy-swj6c" is "Ready"
	I1124 03:11:37.772822  631782 pod_ready.go:86] duration metric: took 398.744991ms for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:37.972562  631782 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:38.373638  631782 pod_ready.go:94] pod "kube-scheduler-no-preload-603010" is "Ready"
	I1124 03:11:38.373665  631782 pod_ready.go:86] duration metric: took 401.078498ms for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:38.373676  631782 pod_ready.go:40] duration metric: took 1.604514204s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:38.416726  631782 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:11:38.418110  631782 out.go:179] * Done! kubectl is now configured to use "no-preload-603010" cluster and "default" namespace by default
	W1124 03:11:38.424014  631782 root.go:91] failed to log command end to audit: failed to find a log row with id equals to c63882ef-fed9-480a-88cd-1e18d4178646
	
	
	==> CRI-O <==
	Nov 24 03:11:34 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:34.644005713Z" level=info msg="Starting container: 1b304a21316c0ee8776b4e67836216aa8c4e9980c118f57c0ccac6e70ffab977" id=9205c014-1b4c-45d3-8c0b-b8bba7fd6fd8 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:11:34 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:34.645683742Z" level=info msg="Started container" PID=1844 containerID=1b304a21316c0ee8776b4e67836216aa8c4e9980c118f57c0ccac6e70ffab977 description=kube-system/coredns-66bc5c9577-w62hm/coredns id=9205c014-1b4c-45d3-8c0b-b8bba7fd6fd8 name=/runtime.v1.RuntimeService/StartContainer sandboxID=87b2abadded9d809c06ede5b92a19bb51b529202b852ce9b28caccea3409ee81
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.490427683Z" level=info msg="Running pod sandbox: default/busybox/POD" id=57d9e9a3-dce3-4767-b859-8bd5d68b0564 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.490497196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.495264919Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3c315eb8b704eb804e930c15eceb3d25ae163434ee5339042b92b9dfb562dc5d UID:3399a559-d753-41f3-86bb-203b96faca7f NetNS:/var/run/netns/d92acbd8-cf60-46cb-bc0f-ac5f9afb86e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007343b8}] Aliases:map[]}"
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.49529987Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.512079354Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3c315eb8b704eb804e930c15eceb3d25ae163434ee5339042b92b9dfb562dc5d UID:3399a559-d753-41f3-86bb-203b96faca7f NetNS:/var/run/netns/d92acbd8-cf60-46cb-bc0f-ac5f9afb86e6 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007343b8}] Aliases:map[]}"
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.512205175Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.512968944Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.51374612Z" level=info msg="Ran pod sandbox 3c315eb8b704eb804e930c15eceb3d25ae163434ee5339042b92b9dfb562dc5d with infra container: default/busybox/POD" id=57d9e9a3-dce3-4767-b859-8bd5d68b0564 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.514948238Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=46a7c46c-8a64-411e-826d-afafea97248c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.515072044Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=46a7c46c-8a64-411e-826d-afafea97248c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.515104841Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=46a7c46c-8a64-411e-826d-afafea97248c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.515840543Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=477b01b3-2066-4349-91cd-bdfd4a1ba789 name=/runtime.v1.ImageService/PullImage
	Nov 24 03:11:37 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:37.517450884Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:11:38 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:38.132181533Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=477b01b3-2066-4349-91cd-bdfd4a1ba789 name=/runtime.v1.ImageService/PullImage
	Nov 24 03:11:38 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:38.133007797Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4b3c3aa8-c38e-4a29-a678-60d961f091d7 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:38 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:38.134328326Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ab6d1687-9549-4d71-b3a0-8370659f5958 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:38 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:38.137341999Z" level=info msg="Creating container: default/busybox/busybox" id=46bacb83-af11-4cc2-8f67-cbb363937a56 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:38 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:38.137467031Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:38 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:38.140648556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:38 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:38.141080009Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:38 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:38.164135285Z" level=info msg="Created container 922162ef06f9fc22fee1d2d00e054271d153a75837c66b44cf90f4681acec4e7: default/busybox/busybox" id=46bacb83-af11-4cc2-8f67-cbb363937a56 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:38 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:38.164625352Z" level=info msg="Starting container: 922162ef06f9fc22fee1d2d00e054271d153a75837c66b44cf90f4681acec4e7" id=93da9f23-15c8-40be-80c3-c5522774fb9a name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:11:38 default-k8s-diff-port-993813 crio[768]: time="2025-11-24T03:11:38.166225715Z" level=info msg="Started container" PID=1927 containerID=922162ef06f9fc22fee1d2d00e054271d153a75837c66b44cf90f4681acec4e7 description=default/busybox/busybox id=93da9f23-15c8-40be-80c3-c5522774fb9a name=/runtime.v1.RuntimeService/StartContainer sandboxID=3c315eb8b704eb804e930c15eceb3d25ae163434ee5339042b92b9dfb562dc5d
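	[editor's note] The busybox lines above trace the standard CRI cold-pull sequence: ImageStatus reports the tag missing, PullImage fetches it and returns the digest, then CreateContainer/StartContainer run against the pulled image. The same two image calls, issued directly over the CRI socket, look roughly like this (a sketch for illustration; on a real node the kubelet makes these calls):

-- example --
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// ensureImage reproduces the CRI call pair from the log: check image
// status first, pull only if the image is not present.
func ensureImage(client runtimeapi.ImageServiceClient, img string) error {
	spec := &runtimeapi.ImageSpec{Image: img}
	st, err := client.ImageStatus(context.TODO(), &runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		return err
	}
	if st.Image != nil {
		return nil // already present
	}
	resp, err := client.PullImage(context.TODO(), &runtimeapi.PullImageRequest{Image: spec})
	if err != nil {
		return err
	}
	fmt.Println("pulled:", resp.ImageRef) // digest, as in the "Pulled image" line
	return nil
}

func main() {
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	client := runtimeapi.NewImageServiceClient(conn)
	fmt.Println(ensureImage(client, "gcr.io/k8s-minikube/busybox:1.28.4-glibc"))
}
-- /example --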
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	922162ef06f9f       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   3c315eb8b704e       busybox                                                default
	1b304a21316c0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 seconds ago      Running             coredns                   0                   87b2abadded9d       coredns-66bc5c9577-w62hm                               kube-system
	4943117161f48       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   3b83488d6c809       storage-provisioner                                    kube-system
	da37f5d37ea02       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      22 seconds ago      Running             kindnet-cni               0                   9d0d2107a8a56       kindnet-w6sh6                                          kube-system
	9225df88c27f9       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      22 seconds ago      Running             kube-proxy                0                   0ab47bb73ec67       kube-proxy-xgjzs                                       kube-system
	fef47459af67f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   9fb82523d2ad6       etcd-default-k8s-diff-port-993813                      kube-system
	0e61fcc08e9ad       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   dd72e94a25a45       kube-scheduler-default-k8s-diff-port-993813            kube-system
	679fa7e34b67a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   48bda0e078435       kube-controller-manager-default-k8s-diff-port-993813   kube-system
	ef4fe7ffb6ac4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   77c2ff418b979       kube-apiserver-default-k8s-diff-port-993813            kube-system
	
	
	==> coredns [1b304a21316c0ee8776b4e67836216aa8c4e9980c118f57c0ccac6e70ffab977] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47753 - 47901 "HINFO IN 3941286780381186563.202992069803996733. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.859940915s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-993813
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-993813
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=default-k8s-diff-port-993813
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_11_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:11:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-993813
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:11:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:11:34 +0000   Mon, 24 Nov 2025 03:11:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:11:34 +0000   Mon, 24 Nov 2025 03:11:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:11:34 +0000   Mon, 24 Nov 2025 03:11:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:11:34 +0000   Mon, 24 Nov 2025 03:11:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-993813
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                704691fb-a437-4d94-adeb-2d360c12ce3d
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-66bc5c9577-w62hm                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-993813                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-w6sh6                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-993813             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-993813    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-xgjzs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-993813             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
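	[editor's note] The percentages in this table are requests over allocatable, truncated to whole percent: 850m of 8 CPU (8000m) is about 10.6%, shown as 10%, and 220Mi of 32863352Ki allocatable memory is about 0.7%, shown as 0%. In integer arithmetic:

-- example --
package main

import "fmt"

func main() {
	// CPU: 850m requested of 8 cores (8000m) allocatable.
	fmt.Println(850 * 100 / 8000) // 10  -> "850m (10%)"
	// Memory: 220Mi requested of 32863352Ki allocatable (220Mi = 225280Ki).
	fmt.Println(225280 * 100 / 32863352) // 0 -> "220Mi (0%)"
}
-- /example --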
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s (x8 over 34s)  kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s (x8 over 34s)  kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s (x8 over 34s)  kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s                kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s                kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s                kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s                node-controller  Node default-k8s-diff-port-993813 event: Registered Node default-k8s-diff-port-993813 in Controller
	  Normal  NodeReady                11s                kubelet          Node default-k8s-diff-port-993813 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [fef47459af67fc8a2eb5fbb74002464cb3d20efbdbf0ae4080b5450e43e6946c] <==
	{"level":"warn","ts":"2025-11-24T03:11:13.890818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:13.906842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48228","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:13.917842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:13.930233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:13.939807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:13.949442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:13.961331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:13.971210Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:13.989674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48362","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:13.995437Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.002204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.010852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.021578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.032604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.040578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.050397Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.056933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48506","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.063616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48526","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.071923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.079374Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.087388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.100384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.110520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.120027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48700","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.196269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48848","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:11:45 up  1:54,  0 user,  load average: 4.79, 4.02, 2.55
	Linux default-k8s-diff-port-993813 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [da37f5d37ea029c809e360172e03fb488fa8df9c9c1f6044e09b14461523d9fd] <==
	I1124 03:11:23.665252       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:11:23.665738       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 03:11:23.665992       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:11:23.666029       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:11:23.666073       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:11:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:11:23.915316       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:11:23.915658       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:11:23.915730       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:11:23.969287       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:11:24.316060       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:11:24.316086       1 metrics.go:72] Registering metrics
	I1124 03:11:24.316162       1 controller.go:711] "Syncing nftables rules"
	I1124 03:11:33.920164       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:11:33.920224       1 main.go:301] handling current node
	I1124 03:11:43.919286       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:11:43.919325       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ef4fe7ffb6ac4363ed929308bd8dca83ae57ac14c5d8801004653f902e2c58ac] <==
	I1124 03:11:14.841088       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1124 03:11:14.846633       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:11:14.846701       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:11:14.848276       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 03:11:14.854124       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:11:14.854222       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:11:15.013039       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:11:15.720205       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:11:15.723989       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:11:15.724008       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:11:16.164729       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:11:16.202132       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:11:16.322828       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:11:16.328662       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1124 03:11:16.329712       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:11:16.333823       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:11:16.782953       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:11:17.111543       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:11:17.119132       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:11:17.126630       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:11:21.935305       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:11:21.939014       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:11:22.434757       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:11:22.739206       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1124 03:11:44.265933       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:52942: use of closed network connection
	
	
	==> kube-controller-manager [679fa7e34b67a348f348d6757761f066c1927062a0db3e15256eb75e33b37abf] <==
	I1124 03:11:21.781492       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:11:21.781591       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 03:11:21.781735       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 03:11:21.782443       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 03:11:21.782465       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:11:21.782705       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:11:21.782814       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:11:21.782927       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:11:21.783186       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:11:21.783220       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 03:11:21.785549       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:11:21.785563       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:11:21.786741       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 03:11:21.786795       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 03:11:21.786833       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 03:11:21.786845       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 03:11:21.786852       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 03:11:21.789215       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:11:21.791377       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:11:21.793301       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:11:21.793569       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-993813" podCIDRs=["10.244.0.0/24"]
	I1124 03:11:21.799651       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:11:21.803967       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:11:21.806162       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:11:36.734176       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9225df88c27f9c26602cf02ce4bf194d544df1bfdadc4f27bc99199596e49326] <==
	I1124 03:11:23.504681       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:11:23.581623       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:11:23.683000       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:11:23.683045       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 03:11:23.683164       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:11:23.714644       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:11:23.714747       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:11:23.723056       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:11:23.723541       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:11:23.723611       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:11:23.726749       1 config.go:200] "Starting service config controller"
	I1124 03:11:23.726821       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:11:23.726919       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:11:23.726955       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:11:23.727881       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:11:23.727971       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:11:23.729730       1 config.go:309] "Starting node config controller"
	I1124 03:11:23.729792       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:11:23.729819       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:11:23.827033       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:11:23.827382       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:11:23.828092       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [0e61fcc08e9ada816782e10e29c1346361ef126c5295c9f031078558a7a1993d] <==
	E1124 03:11:14.798240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:11:14.798298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:11:14.798311       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:11:14.798319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:11:14.798339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:11:14.798384       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:11:14.798402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:11:14.798465       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:11:14.798478       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:11:14.798542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:11:14.798551       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:11:14.798570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:11:14.798755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:11:14.799340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:11:15.692668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:11:15.705701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:11:15.708597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:11:15.751688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:11:15.799581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:11:15.807590       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 03:11:15.809471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:11:15.828382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:11:15.830281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:11:15.848258       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I1124 03:11:18.192017       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:22.490194    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ff565cd3-e1be-4525-ab1f-465211f42f79-xtables-lock\") pod \"kindnet-w6sh6\" (UID: \"ff565cd3-e1be-4525-ab1f-465211f42f79\") " pod="kube-system/kindnet-w6sh6"
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:22.490266    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmd7f\" (UniqueName: \"kubernetes.io/projected/ff565cd3-e1be-4525-ab1f-465211f42f79-kube-api-access-qmd7f\") pod \"kindnet-w6sh6\" (UID: \"ff565cd3-e1be-4525-ab1f-465211f42f79\") " pod="kube-system/kindnet-w6sh6"
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:22.490298    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/82b10446-c8e9-4d11-aa15-ed7792a91865-kube-proxy\") pod \"kube-proxy-xgjzs\" (UID: \"82b10446-c8e9-4d11-aa15-ed7792a91865\") " pod="kube-system/kube-proxy-xgjzs"
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:22.490326    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82b10446-c8e9-4d11-aa15-ed7792a91865-xtables-lock\") pod \"kube-proxy-xgjzs\" (UID: \"82b10446-c8e9-4d11-aa15-ed7792a91865\") " pod="kube-system/kube-proxy-xgjzs"
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:22.490357    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82b10446-c8e9-4d11-aa15-ed7792a91865-lib-modules\") pod \"kube-proxy-xgjzs\" (UID: \"82b10446-c8e9-4d11-aa15-ed7792a91865\") " pod="kube-system/kube-proxy-xgjzs"
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:22.490377    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ff565cd3-e1be-4525-ab1f-465211f42f79-lib-modules\") pod \"kindnet-w6sh6\" (UID: \"ff565cd3-e1be-4525-ab1f-465211f42f79\") " pod="kube-system/kindnet-w6sh6"
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:22.490403    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fflfk\" (UniqueName: \"kubernetes.io/projected/82b10446-c8e9-4d11-aa15-ed7792a91865-kube-api-access-fflfk\") pod \"kube-proxy-xgjzs\" (UID: \"82b10446-c8e9-4d11-aa15-ed7792a91865\") " pod="kube-system/kube-proxy-xgjzs"
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:22.490424    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ff565cd3-e1be-4525-ab1f-465211f42f79-cni-cfg\") pod \"kindnet-w6sh6\" (UID: \"ff565cd3-e1be-4525-ab1f-465211f42f79\") " pod="kube-system/kindnet-w6sh6"
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: E1124 03:11:22.597769    1304 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: E1124 03:11:22.597816    1304 projected.go:196] Error preparing data for projected volume kube-api-access-qmd7f for pod kube-system/kindnet-w6sh6: configmap "kube-root-ca.crt" not found
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: E1124 03:11:22.597767    1304 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: E1124 03:11:22.597932    1304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ff565cd3-e1be-4525-ab1f-465211f42f79-kube-api-access-qmd7f podName:ff565cd3-e1be-4525-ab1f-465211f42f79 nodeName:}" failed. No retries permitted until 2025-11-24 03:11:23.097870256 +0000 UTC m=+6.209797807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qmd7f" (UniqueName: "kubernetes.io/projected/ff565cd3-e1be-4525-ab1f-465211f42f79-kube-api-access-qmd7f") pod "kindnet-w6sh6" (UID: "ff565cd3-e1be-4525-ab1f-465211f42f79") : configmap "kube-root-ca.crt" not found
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: E1124 03:11:22.597951    1304 projected.go:196] Error preparing data for projected volume kube-api-access-fflfk for pod kube-system/kube-proxy-xgjzs: configmap "kube-root-ca.crt" not found
	Nov 24 03:11:22 default-k8s-diff-port-993813 kubelet[1304]: E1124 03:11:22.598038    1304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/82b10446-c8e9-4d11-aa15-ed7792a91865-kube-api-access-fflfk podName:82b10446-c8e9-4d11-aa15-ed7792a91865 nodeName:}" failed. No retries permitted until 2025-11-24 03:11:23.098013766 +0000 UTC m=+6.209941318 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fflfk" (UniqueName: "kubernetes.io/projected/82b10446-c8e9-4d11-aa15-ed7792a91865-kube-api-access-fflfk") pod "kube-proxy-xgjzs" (UID: "82b10446-c8e9-4d11-aa15-ed7792a91865") : configmap "kube-root-ca.crt" not found
	Nov 24 03:11:24 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:24.030765    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-w6sh6" podStartSLOduration=2.030744455 podStartE2EDuration="2.030744455s" podCreationTimestamp="2025-11-24 03:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:24.022515309 +0000 UTC m=+7.134442863" watchObservedRunningTime="2025-11-24 03:11:24.030744455 +0000 UTC m=+7.142672007"
	Nov 24 03:11:24 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:24.038196    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xgjzs" podStartSLOduration=2.038178743 podStartE2EDuration="2.038178743s" podCreationTimestamp="2025-11-24 03:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:24.038112983 +0000 UTC m=+7.150040534" watchObservedRunningTime="2025-11-24 03:11:24.038178743 +0000 UTC m=+7.150106295"
	Nov 24 03:11:34 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:34.266665    1304 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:11:34 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:34.376848    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/50428c8a-8e0e-48d0-ad32-38a93a976ba9-tmp\") pod \"storage-provisioner\" (UID: \"50428c8a-8e0e-48d0-ad32-38a93a976ba9\") " pod="kube-system/storage-provisioner"
	Nov 24 03:11:34 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:34.376910    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp8rd\" (UniqueName: \"kubernetes.io/projected/50428c8a-8e0e-48d0-ad32-38a93a976ba9-kube-api-access-vp8rd\") pod \"storage-provisioner\" (UID: \"50428c8a-8e0e-48d0-ad32-38a93a976ba9\") " pod="kube-system/storage-provisioner"
	Nov 24 03:11:34 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:34.376936    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4c6f1012-3439-464e-bf6a-4c175f98d54d-config-volume\") pod \"coredns-66bc5c9577-w62hm\" (UID: \"4c6f1012-3439-464e-bf6a-4c175f98d54d\") " pod="kube-system/coredns-66bc5c9577-w62hm"
	Nov 24 03:11:34 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:34.376957    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9tnp\" (UniqueName: \"kubernetes.io/projected/4c6f1012-3439-464e-bf6a-4c175f98d54d-kube-api-access-x9tnp\") pod \"coredns-66bc5c9577-w62hm\" (UID: \"4c6f1012-3439-464e-bf6a-4c175f98d54d\") " pod="kube-system/coredns-66bc5c9577-w62hm"
	Nov 24 03:11:35 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:35.048542    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-w62hm" podStartSLOduration=13.048521264 podStartE2EDuration="13.048521264s" podCreationTimestamp="2025-11-24 03:11:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:35.048119542 +0000 UTC m=+18.160047118" watchObservedRunningTime="2025-11-24 03:11:35.048521264 +0000 UTC m=+18.160448816"
	Nov 24 03:11:35 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:35.067560    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.067539284 podStartE2EDuration="12.067539284s" podCreationTimestamp="2025-11-24 03:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:35.067066511 +0000 UTC m=+18.178994084" watchObservedRunningTime="2025-11-24 03:11:35.067539284 +0000 UTC m=+18.179466837"
	Nov 24 03:11:37 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:37.293416    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsfsc\" (UniqueName: \"kubernetes.io/projected/3399a559-d753-41f3-86bb-203b96faca7f-kube-api-access-jsfsc\") pod \"busybox\" (UID: \"3399a559-d753-41f3-86bb-203b96faca7f\") " pod="default/busybox"
	Nov 24 03:11:39 default-k8s-diff-port-993813 kubelet[1304]: I1124 03:11:39.057767    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.4393124529999999 podStartE2EDuration="2.057746079s" podCreationTimestamp="2025-11-24 03:11:37 +0000 UTC" firstStartedPulling="2025-11-24 03:11:37.515366917 +0000 UTC m=+20.627294448" lastFinishedPulling="2025-11-24 03:11:38.133800537 +0000 UTC m=+21.245728074" observedRunningTime="2025-11-24 03:11:39.057693215 +0000 UTC m=+22.169620778" watchObservedRunningTime="2025-11-24 03:11:39.057746079 +0000 UTC m=+22.169673632"
	
	
	==> storage-provisioner [4943117161f4800c47dd5a7876067e2893a39435c912300ea7998fe020d1f408] <==
	I1124 03:11:34.646931       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:11:34.655432       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:11:34.655474       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:11:34.657495       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:34.662655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:11:34.662850       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:11:34.662955       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3f822d05-a76f-4ae4-9301-4b0cf90b6f0e", APIVersion:"v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-993813_bdf05b30-50b4-490c-995a-94a500d33fa8 became leader
	I1124 03:11:34.662987       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-993813_bdf05b30-50b4-490c-995a-94a500d33fa8!
	W1124 03:11:34.665508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:34.671126       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:11:34.763391       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-993813_bdf05b30-50b4-490c-995a-94a500d33fa8!
	W1124 03:11:36.674270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:36.678177       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:38.681007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:38.684613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:40.687215       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:40.690692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:42.693474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:42.697822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:44.701324       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:44.704969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
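The storage-provisioner log above ends in a steady drip of "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings because the provisioner still takes its leader-election lock on an Endpoints object (the LeaderElection event above references Kind:"Endpoints", Name:"k8s.io-minikube-hostpath"). Below is a minimal sketch of the migration the warning points toward, using client-go's coordination.k8s.io Lease lock instead; the namespace and lease name are copied from the log, while the client setup, timings, and callbacks are illustrative assumptions, not the provisioner's actual code:

package main

import (
	"context"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// In-cluster config, as a provisioner pod would use (assumption).
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, err := os.Hostname()
	if err != nil {
		panic(err)
	}

	// Lease lock replacing the deprecated Endpoints lock; the lease name
	// mirrors the one in the log (k8s.io-minikube-hostpath).
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id},
	)
	if err != nil {
		panic(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Start the provisioner controller here.
			},
			OnStoppedLeading: func() {
				// Lost the lease; exit and let the pod restart.
				os.Exit(1)
			},
		},
	})
}

Because Lease renewals never touch v1 Endpoints, the per-renewal warnings seen above would disappear with a lock like this.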
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-993813 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.15s)
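The long run of etcd "rejected connection on client endpoint ... EOF" warnings in the logs above has the classic shape of a plain TCP dialer that connects to the client port and hangs up before any TLS handshake, which is how simple port-liveness probes behave. A tiny sketch that provokes the same log line, assuming a local etcd listening on the conventional client port 2379 (an assumption; the port itself is not shown in the log):

package main

import (
	"fmt"
	"net"
)

func main() {
	// Dial the etcd client port and close immediately, without starting
	// a TLS handshake; etcd's embed server logs each such connection as
	// "rejected connection on client endpoint" with error "EOF".
	conn, err := net.Dial("tcp", "127.0.0.1:2379")
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	conn.Close()
	fmt.Println("opened and closed one probe connection")
}

Each such dial surfaces on the server as one warning, which matches the one-per-connection pattern and the fresh ephemeral source ports (48284, 48310, ...) in the log above.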

x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.22s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-603010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-603010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (262.913685ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:11:45Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-603010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-603010 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-603010 describe deploy/metrics-server -n kube-system: exit status 1 (79.093667ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-603010 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
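The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-container check, which the error message shows shelling out to `sudo runc list -f json`; on this node that command dies with "open /run/runc: no such file or directory", i.e. runc's default state directory was never created. A minimal sketch of that style of probe, for illustration only; the JSON field names follow runc's list output, and none of this is minikube's actual implementation:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the two fields of `runc list -f json` output that
// the probe cares about.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused runs the exact command from the error message and returns the
// IDs of containers reported as paused. On this test node it fails before
// producing any output because /run/runc is missing.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, fmt.Errorf("parse runc list output: %w", err)
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}

In a healthy run the command prints a JSON array (or "null" when nothing is running, which unmarshals to an empty slice), and the probe simply filters on the "paused" status.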
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-603010
helpers_test.go:243: (dbg) docker inspect no-preload-603010:

-- stdout --
	[
	    {
	        "Id": "6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845",
	        "Created": "2025-11-24T03:10:43.847831353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 632750,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:10:43.890047631Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845/hostname",
	        "HostsPath": "/var/lib/docker/containers/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845/hosts",
	        "LogPath": "/var/lib/docker/containers/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845-json.log",
	        "Name": "/no-preload-603010",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-603010:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-603010",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845",
	                "LowerDir": "/var/lib/docker/overlay2/883ce309da55410098919db1c8e27882341f46628e370779f1e1f648adf5829e-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/883ce309da55410098919db1c8e27882341f46628e370779f1e1f648adf5829e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/883ce309da55410098919db1c8e27882341f46628e370779f1e1f648adf5829e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/883ce309da55410098919db1c8e27882341f46628e370779f1e1f648adf5829e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-603010",
	                "Source": "/var/lib/docker/volumes/no-preload-603010/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-603010",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-603010",
	                "name.minikube.sigs.k8s.io": "no-preload-603010",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "7e3984b18a9299d3323bf3019826d4db670b9d8d7245f7ef59aaa75c3b685bcc",
	            "SandboxKey": "/var/run/docker/netns/7e3984b18a92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-603010": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6fb41680caede660e77e75cbbc4bea8a2931e68f7736aa43850d10472e9557bd",
	                    "EndpointID": "93032e7872a45b21d27a4fac27d9632a4b24077f6d266dd4f48fa6f0fa5dc02b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "12:5d:d5:ec:e2:85",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-603010",
	                        "6cf4d6c6dc34"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603010 -n no-preload-603010
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-603010 logs -n 25
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-965704 sudo systemctl cat docker --no-pager                                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /etc/docker/daemon.json                                                                                                                                                                                            │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo docker system info                                                                                                                                                                                                     │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:11 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                               │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                         │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cri-dockerd --version                                                                                                                                                                                                  │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo systemctl cat containerd --no-pager                                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo containerd config dump                                                                                                                                                                                                 │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo crio config                                                                                                                                                                                                            │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ delete  │ -p flannel-965704                                                                                                                                                                                                                             │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-438041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-579951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p newest-cni-438041 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p old-k8s-version-579951 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-603010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:10:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
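	
	Each entry below follows the klog header described above: a severity letter (I/W/E/F), the date as mmdd, a timestamp, the PID, and the emitting source file and line. When triaging a failed run it is often enough to filter for warnings and errors; a minimal sketch, assuming the log has been saved to a file (the filename last_start.log is illustrative):
	
	# keep only W/E/F entries from a klog-formatted dump (filename is hypothetical)
	grep -E '^[[:space:]]*[WEF][0-9]{4} ' last_start.log
	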
	I1124 03:10:57.127829  639611 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:10:57.127990  639611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:10:57.128000  639611 out.go:374] Setting ErrFile to fd 2...
	I1124 03:10:57.128004  639611 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:10:57.128242  639611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:10:57.128839  639611 out.go:368] Setting JSON to false
	I1124 03:10:57.129993  639611 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6804,"bootTime":1763947053,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:10:57.130043  639611 start.go:143] virtualization: kvm guest
	I1124 03:10:57.131842  639611 out.go:179] * [newest-cni-438041] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:10:57.133006  639611 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:10:57.133003  639611 notify.go:221] Checking for updates...
	I1124 03:10:57.135165  639611 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:10:57.136402  639611 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:10:57.137671  639611 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:10:57.138741  639611 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:10:57.139904  639611 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:10:57.141390  639611 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:10:57.141496  639611 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:10:57.141578  639611 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:10:57.141703  639611 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:10:57.166641  639611 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:10:57.166738  639611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:10:57.221961  639611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-24 03:10:57.211378242 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
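	
	The driver probe above shells out to docker system info --format "{{json .}}" and logs the decoded struct on one line. To reproduce the probe by hand with readable output, the same command can be piped through jq (a reproduction aid, assuming jq is installed; it is not part of the minikube run itself):
	
	docker system info --format '{{json .}}' | jq '{ServerVersion, CgroupDriver, NCPU, MemTotal}'
	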
	I1124 03:10:57.222054  639611 docker.go:319] overlay module found
	I1124 03:10:57.223745  639611 out.go:179] * Using the docker driver based on user configuration
	I1124 03:10:57.224957  639611 start.go:309] selected driver: docker
	I1124 03:10:57.224977  639611 start.go:927] validating driver "docker" against <nil>
	I1124 03:10:57.224994  639611 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:10:57.225758  639611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:10:57.290865  639611 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:82 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-24 03:10:57.279924959 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:10:57.291115  639611 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1124 03:10:57.291161  639611 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1124 03:10:57.291452  639611 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:10:57.293881  639611 out.go:179] * Using Docker driver with root privileges
	I1124 03:10:57.295058  639611 cni.go:84] Creating CNI manager for ""
	I1124 03:10:57.295146  639611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:10:57.295161  639611 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:10:57.295265  639611 start.go:353] cluster config:
	{Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
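	
	The full cluster config printed above is also persisted as JSON (the profile.go line a few entries down saves config.json under the profile directory). A quick way to inspect just the Kubernetes settings of a saved profile, assuming the layout from this run and jq available:
	
	jq '.KubernetesConfig' /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json
	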
	I1124 03:10:57.296817  639611 out.go:179] * Starting "newest-cni-438041" primary control-plane node in "newest-cni-438041" cluster
	I1124 03:10:57.297866  639611 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:10:57.299907  639611 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:10:57.301070  639611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:10:57.301103  639611 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:10:57.301112  639611 cache.go:65] Caching tarball of preloaded images
	I1124 03:10:57.301177  639611 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:10:57.301210  639611 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:10:57.301222  639611 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:10:57.301343  639611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json ...
	I1124 03:10:57.301366  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json: {Name:mk1bf53574cdc9152c6531d50672e7a950b9d2f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:10:57.325407  639611 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:10:57.325433  639611 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:10:57.325454  639611 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:10:57.325494  639611 start.go:360] acquireMachinesLock for newest-cni-438041: {Name:mk895e89056f5ce7564002ba75457dcfde41ce4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:10:57.325596  639611 start.go:364] duration metric: took 82.202µs to acquireMachinesLock for "newest-cni-438041"
	I1124 03:10:57.325624  639611 start.go:93] Provisioning new machine with config: &{Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:10:57.325724  639611 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:10:55.541109  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (3.244075519s)
	I1124 03:10:55.541150  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1124 03:10:55.541172  631782 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 03:10:55.541227  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 03:10:56.794831  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.25357343s)
	I1124 03:10:56.794863  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 03:10:56.794908  631782 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 03:10:56.794989  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 03:10:55.833612  636397 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-993813:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.620337954s)
	I1124 03:10:55.833645  636397 kic.go:203] duration metric: took 5.620509753s to extract preloaded images to volume ...
	W1124 03:10:55.833730  636397 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:10:55.833774  636397 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:10:55.833824  636397 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:10:55.899529  636397 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-993813 --name default-k8s-diff-port-993813 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-993813 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-993813 --network default-k8s-diff-port-993813 --ip 192.168.76.2 --volume default-k8s-diff-port-993813:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:10:56.489655  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Running}}
	I1124 03:10:56.513036  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:10:56.535229  636397 cli_runner.go:164] Run: docker exec default-k8s-diff-port-993813 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:10:56.595848  636397 oci.go:144] the created container "default-k8s-diff-port-993813" has a running status.
	I1124 03:10:56.595922  636397 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa...
	I1124 03:10:56.701587  636397 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:10:56.875193  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:10:56.894915  636397 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:10:56.894937  636397 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-993813 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:10:56.946242  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:10:56.964911  636397 machine.go:94] provisionDockerMachine start ...
	I1124 03:10:56.965003  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:10:56.983380  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:10:56.983615  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:10:56.983627  636397 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:10:56.984346  636397 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37014->127.0.0.1:33468: read: connection reset by peer
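	
	The handshake failure here is transient: the container was created milliseconds earlier and its sshd is still starting, so the first dial is reset; minikube keeps retrying until the hostname probe succeeds (the successful result for this session appears below at 03:11:00.135). The same session can be opened manually with the generated key, using the forwarded port and key path that this run logs:
	
	ssh -o StrictHostKeyChecking=no -p 33468 \
	  -i /home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa \
	  docker@127.0.0.1 hostname
	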
	I1124 03:10:57.234863  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:57.734595  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:58.234694  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:58.734330  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:59.234707  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:10:59.735106  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:00.234710  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:00.735086  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:01.235238  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:01.735122  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
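	
	The PID 623347 entries above poll kubectl get sa default every 500 ms: the default service account is created asynchronously by the controller manager after kubeadm finishes, so the loop simply waits for it to exist before continuing. The same check can be run by hand on the node, assuming the profile name from this run:
	
	minikube ssh -p old-k8s-version-579951 -- \
	  sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	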
	I1124 03:10:57.328166  639611 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:10:57.328471  639611 start.go:159] libmachine.API.Create for "newest-cni-438041" (driver="docker")
	I1124 03:10:57.328503  639611 client.go:173] LocalClient.Create starting
	I1124 03:10:57.328585  639611 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:10:57.328619  639611 main.go:143] libmachine: Decoding PEM data...
	I1124 03:10:57.328645  639611 main.go:143] libmachine: Parsing certificate...
	I1124 03:10:57.328730  639611 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:10:57.328758  639611 main.go:143] libmachine: Decoding PEM data...
	I1124 03:10:57.328776  639611 main.go:143] libmachine: Parsing certificate...
	I1124 03:10:57.329238  639611 cli_runner.go:164] Run: docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:10:57.347161  639611 cli_runner.go:211] docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:10:57.347240  639611 network_create.go:284] running [docker network inspect newest-cni-438041] to gather additional debugging logs...
	I1124 03:10:57.347259  639611 cli_runner.go:164] Run: docker network inspect newest-cni-438041
	W1124 03:10:57.366750  639611 cli_runner.go:211] docker network inspect newest-cni-438041 returned with exit code 1
	I1124 03:10:57.366777  639611 network_create.go:287] error running [docker network inspect newest-cni-438041]: docker network inspect newest-cni-438041: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-438041 not found
	I1124 03:10:57.366807  639611 network_create.go:289] output of [docker network inspect newest-cni-438041]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-438041 not found
	
	** /stderr **
	I1124 03:10:57.366976  639611 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:10:57.385293  639611 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:10:57.386152  639611 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:10:57.387409  639611 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:10:57.388971  639611 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-50b2e4e61586 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:0d:88:19:7d:df} reservation:<nil>}
	I1124 03:10:57.389487  639611 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6fb41680caed IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:a0:6c:44:95:b2} reservation:<nil>}
	I1124 03:10:57.390236  639611 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018f44a0}
	I1124 03:10:57.390257  639611 network_create.go:124] attempt to create docker network newest-cni-438041 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 03:10:57.390305  639611 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-438041 newest-cni-438041
	I1124 03:10:57.440525  639611 network_create.go:108] docker network newest-cni-438041 192.168.94.0/24 created
	I1124 03:10:57.440568  639611 kic.go:121] calculated static IP "192.168.94.2" for the "newest-cni-438041" container
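	
	Subnet selection above walks the private 192.168.x.0/24 ranges in increments of 9 in the third octet (49, 58, 67, 76, 85) and takes the first range with no existing bridge, here 192.168.94.0/24; the node then receives the first client address, 192.168.94.2. Once created, the network can be verified with a standard inspect (the Go template shown is illustrative):
	
	docker network inspect newest-cni-438041 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'
	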
	I1124 03:10:57.440642  639611 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:10:57.458704  639611 cli_runner.go:164] Run: docker volume create newest-cni-438041 --label name.minikube.sigs.k8s.io=newest-cni-438041 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:10:57.476351  639611 oci.go:103] Successfully created a docker volume newest-cni-438041
	I1124 03:10:57.476450  639611 cli_runner.go:164] Run: docker run --rm --name newest-cni-438041-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-438041 --entrypoint /usr/bin/test -v newest-cni-438041:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:10:58.353729  639611 oci.go:107] Successfully prepared a docker volume newest-cni-438041
	I1124 03:10:58.353794  639611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:10:58.353806  639611 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:10:58.353903  639611 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-438041:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:10:58.184837  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.389817981s)
	I1124 03:10:58.184869  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1124 03:10:58.184909  631782 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:10:58.184953  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
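	
	The PID 631782 entries interleaved here are evidently the no-preload profile, which cannot use the preloaded tarball: each cached image is copied to the node and loaded into CRI-O's image store via podman, one at a time, with per-image timings logged. The loaded set can be listed afterwards, assuming the no-preload-603010 profile name from this run:
	
	minikube ssh -p no-preload-603010 -- \
	  sudo podman images --format '{{.Repository}}:{{.Tag}}'
	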
	I1124 03:11:00.135230  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:11:00.135263  636397 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993813"
	I1124 03:11:00.135337  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.156666  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:00.157040  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:11:00.157061  636397 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993813 && echo "default-k8s-diff-port-993813" | sudo tee /etc/hostname
	I1124 03:11:00.317337  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:11:00.317424  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.338575  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:00.338824  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:11:00.338843  636397 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993813/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:11:00.487669  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: 
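	
	The guarded shell above applies the Debian convention of mapping the machine's own hostname to 127.0.1.1: if /etc/hosts already has an entry for the new hostname it is left alone, an existing 127.0.1.1 line is rewritten in place, and otherwise a new line is appended, which keeps the edit idempotent across reprovisions. The result can be checked from the host:
	
	minikube ssh -p default-k8s-diff-port-993813 -- grep 127.0.1.1 /etc/hosts
	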
	I1124 03:11:00.487698  636397 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:11:00.487736  636397 ubuntu.go:190] setting up certificates
	I1124 03:11:00.487751  636397 provision.go:84] configureAuth start
	I1124 03:11:00.487815  636397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:11:00.511564  636397 provision.go:143] copyHostCerts
	I1124 03:11:00.511630  636397 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:11:00.511666  636397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:11:00.511735  636397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:11:00.514009  636397 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:11:00.514030  636397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:11:00.514075  636397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:11:00.514159  636397 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:11:00.514167  636397 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:11:00.514200  636397 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:11:00.514270  636397 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993813 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-993813 localhost minikube]
	I1124 03:11:00.658058  636397 provision.go:177] copyRemoteCerts
	I1124 03:11:00.658133  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:11:00.658198  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.678015  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:00.787811  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:11:00.908237  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 03:11:00.926667  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:11:00.945146  636397 provision.go:87] duration metric: took 457.380171ms to configureAuth
	I1124 03:11:00.945175  636397 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:11:00.945368  636397 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:00.945497  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:00.963523  636397 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:00.963843  636397 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 03:11:00.963867  636397 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:11:01.528016  636397 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:11:01.528042  636397 machine.go:97] duration metric: took 4.563106275s to provisionDockerMachine
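	
	The last provisioning step above writes a one-line environment file, /etc/sysconfig/crio.minikube, marking the service CIDR 10.96.0.0/12 as an insecure registry range, then restarts CRI-O to pick it up; the restart accounts for most of that SSH command's roughly half-second runtime. The file can be inspected on the node:
	
	minikube ssh -p default-k8s-diff-port-993813 -- cat /etc/sysconfig/crio.minikube
	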
	I1124 03:11:01.528055  636397 client.go:176] duration metric: took 12.433514854s to LocalClient.Create
	I1124 03:11:01.528076  636397 start.go:167] duration metric: took 12.433610792s to libmachine.API.Create "default-k8s-diff-port-993813"
	I1124 03:11:01.528087  636397 start.go:293] postStartSetup for "default-k8s-diff-port-993813" (driver="docker")
	I1124 03:11:01.528107  636397 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:11:01.528192  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:11:01.528250  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:01.550426  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:01.725783  636397 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:11:01.731121  636397 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:11:01.731156  636397 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:11:01.731171  636397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:11:01.731245  636397 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:11:01.731344  636397 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:11:01.731461  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:11:01.741273  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:02.020513  636397 start.go:296] duration metric: took 492.40359ms for postStartSetup
	I1124 03:11:02.119944  636397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:11:02.137546  636397 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/config.json ...
	I1124 03:11:02.185355  636397 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:11:02.185405  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:02.201426  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:02.297393  636397 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:11:02.302398  636397 start.go:128] duration metric: took 13.210072434s to createHost
	I1124 03:11:02.302422  636397 start.go:83] releasing machines lock for "default-k8s-diff-port-993813", held for 13.210223546s
	I1124 03:11:02.302502  636397 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:11:02.319872  636397 ssh_runner.go:195] Run: cat /version.json
	I1124 03:11:02.319913  636397 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:11:02.319948  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:02.319995  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:02.340353  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:02.340353  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:02.486835  636397 ssh_runner.go:195] Run: systemctl --version
	I1124 03:11:02.493433  636397 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:11:02.533294  636397 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:11:02.538557  636397 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:11:02.538616  636397 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:11:02.908750  636397 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:11:02.908778  636397 start.go:496] detecting cgroup driver to use...
	I1124 03:11:02.908812  636397 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:11:02.908861  636397 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:11:02.925941  636397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:11:02.941046  636397 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:11:02.941102  636397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:11:02.959121  636397 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:11:02.975801  636397 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:11:03.054110  636397 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:11:03.174491  636397 docker.go:234] disabling docker service ...
	I1124 03:11:03.174560  636397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:11:03.193664  636397 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:11:03.207203  636397 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:11:03.340321  636397 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:11:03.515878  636397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:11:03.529161  636397 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:11:03.543103  636397 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:11:03.543166  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.604968  636397 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:11:03.605035  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.624611  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.645648  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.689119  636397 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:11:03.698440  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.783084  636397 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
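	
	Taken together, the sed edits above shape /etc/crio/crio.conf.d/02-crio.conf: the pause image is pinned, cgroup_manager is switched to systemd, conmon_cgroup is re-added as "pod", and an unprivileged-port sysctl is injected into default_sysctls. Approximately this fragment results (a sketch reconstructed from the sed commands; the surrounding TOML section headers come from the base image's stock drop-in):
	
	# /etc/crio/crio.conf.d/02-crio.conf (reconstructed sketch)
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	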
	I1124 03:11:02.234544  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:02.735113  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:03.234728  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:03.735125  623347 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:03.823251  623347 kubeadm.go:1114] duration metric: took 11.180431183s to wait for elevateKubeSystemPrivileges
	I1124 03:11:03.823284  623347 kubeadm.go:403] duration metric: took 22.234422884s to StartCluster
	I1124 03:11:03.823307  623347 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:03.823374  623347 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:03.824432  623347 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:03.824684  623347 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:03.824740  623347 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:03.824845  623347 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-579951"
	I1124 03:11:03.824727  623347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:03.824906  623347 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-579951"
	I1124 03:11:03.824917  623347 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:11:03.824923  623347 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-579951"
	I1124 03:11:03.824900  623347 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-579951"
	I1124 03:11:03.825024  623347 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:03.825377  623347 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:03.825590  623347 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:03.826953  623347 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:03.828395  623347 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:03.862253  623347 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-579951"
	I1124 03:11:03.862302  623347 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:03.862810  623347 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:03.864365  623347 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:03.807318  636397 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:03.820946  636397 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:11:03.839099  636397 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:11:03.853603  636397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:04.008696  636397 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:11:04.280958  636397 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:11:04.281140  636397 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:11:04.287138  636397 start.go:564] Will wait 60s for crictl version
	I1124 03:11:04.287195  636397 ssh_runner.go:195] Run: which crictl
	I1124 03:11:04.296400  636397 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:11:04.343627  636397 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
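	
	With /etc/crictl.yaml pointing at unix:///var/run/crio/crio.sock (written a few entries up), the same version probe works from a shell on the node; the endpoint flag below is redundant with the config file and shown only for completeness:
	
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	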
	I1124 03:11:04.343993  636397 ssh_runner.go:195] Run: crio --version
	I1124 03:11:04.389849  636397 ssh_runner.go:195] Run: crio --version
	I1124 03:11:04.426944  636397 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:11:03.866933  623347 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:03.866992  623347 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:03.867050  623347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:03.908181  623347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:03.911219  623347 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:03.911443  623347 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:03.911619  623347 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:03.949048  623347 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:03.966864  623347 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:04.039230  623347 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:04.056821  623347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:04.079844  623347 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:04.252855  623347 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:04.253835  623347 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-579951" to be "Ready" ...
	I1124 03:11:04.604404  623347 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:04.605457  623347 addons.go:530] duration metric: took 780.71049ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:04.763969  623347 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-579951" context rescaled to 1 replicas
	W1124 03:11:06.257869  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
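
(The sed pipeline a few lines up rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the network gateway from inside the cluster. To confirm the record landed — a sketch, assuming kubectl is pointed at the old-k8s-version-579951 profile:)

    # the injected hosts block should carry the gateway IP from this log
    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # expected: 192.168.103.1 host.minikube.internal, then fallthrough
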
	I1124 03:11:03.812979  639611 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-438041:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.459016714s)
	I1124 03:11:03.813017  639611 kic.go:203] duration metric: took 5.459207202s to extract preloaded images to volume ...
	W1124 03:11:03.813173  639611 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:11:03.813255  639611 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:11:03.813304  639611 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:11:03.930433  639611 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-438041 --name newest-cni-438041 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-438041 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-438041 --network newest-cni-438041 --ip 192.168.94.2 --volume newest-cni-438041:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:11:04.484106  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Running}}
	I1124 03:11:04.506492  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:04.527784  639611 cli_runner.go:164] Run: docker exec newest-cni-438041 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:11:04.586541  639611 oci.go:144] the created container "newest-cni-438041" has a running status.
	I1124 03:11:04.586577  639611 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa...
	I1124 03:11:04.720361  639611 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:11:04.758530  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:04.794751  639611 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:11:04.794778  639611 kic_runner.go:114] Args: [docker exec --privileged newest-cni-438041 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:11:04.848966  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:04.868444  639611 machine.go:94] provisionDockerMachine start ...
	I1124 03:11:04.868542  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:04.886704  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:04.887098  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:04.887115  639611 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:11:04.887825  639611 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60056->127.0.0.1:33473: read: connection reset by peer
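
(The "connection reset by peer" here is a routine transient: sshd inside the freshly started kic container is not accepting connections yet, and libmachine retries. The host port it dials is the 127.0.0.1 mapping of container port 22, recoverable with the same inspect template used throughout this log — a sketch, key path taken from the sshutil lines above:)

    port=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-438041)
    ssh -i /home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa \
        -p "$port" docker@127.0.0.1 hostname
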
	I1124 03:11:03.698009  631782 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (5.513031284s)
	I1124 03:11:03.698036  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 03:11:03.698072  631782 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:11:03.698135  631782 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:11:04.540749  631782 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-345525/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 03:11:04.540878  631782 cache_images.go:125] Successfully loaded all cached images
	I1124 03:11:04.540962  631782 cache_images.go:94] duration metric: took 16.632965714s to LoadCachedImages
	I1124 03:11:04.540998  631782 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 03:11:04.541478  631782 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-603010 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:11:04.541629  631782 ssh_runner.go:195] Run: crio config
	I1124 03:11:04.613074  631782 cni.go:84] Creating CNI manager for ""
	I1124 03:11:04.613101  631782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:04.613135  631782 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:11:04.613165  631782 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603010 NodeName:no-preload-603010 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:04.613332  631782 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603010"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:11:04.613410  631782 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:04.624805  631782 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 03:11:04.624880  631782 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:04.636504  631782 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1124 03:11:04.636570  631782 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1124 03:11:04.636598  631782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 03:11:04.637106  631782 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1124 03:11:04.641001  631782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 03:11:04.641031  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1124 03:11:05.924351  631782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:05.942273  631782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 03:11:05.947268  631782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 03:11:05.947299  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1124 03:11:06.319700  631782 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 03:11:06.328312  631782 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 03:11:06.328362  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
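
(The download URLs above carry a ?checksum=file:...sha256 query, so each binary is verified against its published SHA-256 before being cached. The manual equivalent, using the same release URLs — a sketch:)

    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm
    curl -LO https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256
    echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check
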
	I1124 03:11:06.576699  631782 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:06.584640  631782 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:11:06.596881  631782 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:06.706372  631782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
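
(With the rendered config now at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked offline before kubeadm consumes it. A sketch, assuming a kubeadm recent enough to ship the `config validate` subcommand:)

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
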
	I1124 03:11:06.725651  631782 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:06.731312  631782 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:06.856376  631782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:06.964324  631782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:06.983343  631782 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010 for IP: 192.168.85.2
	I1124 03:11:06.983368  631782 certs.go:195] generating shared ca certs ...
	I1124 03:11:06.983389  631782 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:06.983554  631782 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:06.983623  631782 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:06.983638  631782 certs.go:257] generating profile certs ...
	I1124 03:11:06.983713  631782 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key
	I1124 03:11:06.983731  631782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.crt with IP's: []
	I1124 03:11:07.236879  631782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.crt ...
	I1124 03:11:07.236911  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.crt: {Name:mk2d55635da2a9326437d41d4577da0fe14409fc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.237058  631782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key ...
	I1124 03:11:07.237070  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key: {Name:mkaa577d5c9ee92828884715bd0dda9017fc9779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.237153  631782 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738
	I1124 03:11:07.237166  631782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 03:11:07.327953  631782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738 ...
	I1124 03:11:07.327981  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738: {Name:mk8a9cae6d8e3a4cc6d6140e38080bb869e23acd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.328138  631782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738 ...
	I1124 03:11:07.328156  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738: {Name:mkbf13b81ddaf24f4938052522adb9836ef8e1d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.328261  631782 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt.df111738 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt
	I1124 03:11:07.328354  631782 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key
	I1124 03:11:07.328436  631782 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key
	I1124 03:11:07.328458  631782 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt with IP's: []
	I1124 03:11:07.358779  631782 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt ...
	I1124 03:11:07.358798  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt: {Name:mk394a0184e993e66f37c39d12264673ee1326c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:07.358929  631782 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key ...
	I1124 03:11:07.358944  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key: {Name:mkf0922c5b9c127348bd0d94fa6adc983ccc147a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
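
(minikube generates these profile certificates in-process — the crypto.go lines above — signing each with the shared minikubeCA. A roughly equivalent openssl workflow, for illustration only; the file names here are hypothetical:)

    openssl genrsa -out client.key 2048
    openssl req -new -key client.key -subj "/CN=minikube-user/O=system:masters" -out client.csr
    openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out client.crt
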
	I1124 03:11:07.359146  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:07.359197  631782 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:07.359210  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:07.359245  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:07.359288  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:07.359324  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:07.359391  631782 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:07.360046  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:07.377802  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:07.394719  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:07.411226  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:07.427651  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:11:07.443818  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:11:07.461178  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:07.477210  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:11:07.493639  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:07.511874  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:07.528421  631782 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:07.544763  631782 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:07.557346  631782 ssh_runner.go:195] Run: openssl version
	I1124 03:11:07.563499  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:07.571402  631782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:07.574952  631782 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:07.575004  631782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:07.608612  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:07.616619  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:07.624657  631782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:07.628272  631782 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:07.628318  631782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:07.662522  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:11:07.670558  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:07.678360  631782 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:07.681796  631782 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:07.681850  631782 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:07.715936  631782 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
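
(The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each CA in /etc/ssl/certs gets a <subject-hash>.0 symlink so verification can locate it by hash — the b5213941.0 link is exactly that for minikubeCA. Done by hand for one cert:)

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
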
	I1124 03:11:07.723734  631782 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:07.727008  631782 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:11:07.727066  631782 kubeadm.go:401] StartCluster: {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:07.727159  631782 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:07.727200  631782 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:07.757836  631782 cri.go:89] found id: ""
	I1124 03:11:07.757930  631782 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:07.767026  631782 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:11:07.775281  631782 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:11:07.775329  631782 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:11:07.782944  631782 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:11:07.782960  631782 kubeadm.go:158] found existing configuration files:
	
	I1124 03:11:07.782996  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:11:07.790173  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:11:07.790211  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:11:07.797407  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:11:07.804469  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:11:07.804513  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:11:07.811339  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:11:07.818449  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:11:07.818485  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:11:07.825301  631782 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:11:07.832368  631782 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:11:07.832409  631782 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
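
(The grep/rm sequence above is the stale-config cleanup: each kubeconfig under /etc/kubernetes that does not reference this cluster's control-plane endpoint is deleted so kubeadm can regenerate it. Collapsed into a loop — a sketch over the same four files and endpoint:)

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done
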
	I1124 03:11:07.839105  631782 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:11:07.875134  631782 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:11:07.875186  631782 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:11:07.899771  631782 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:11:07.899860  631782 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:11:07.899936  631782 kubeadm.go:319] OS: Linux
	I1124 03:11:07.900023  631782 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:11:07.900109  631782 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:11:07.900181  631782 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:11:07.900246  631782 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:11:07.900310  631782 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:11:07.900374  631782 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:11:07.900436  631782 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:11:07.900489  631782 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:11:07.966533  631782 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:11:07.966689  631782 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:11:07.966849  631782 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:11:07.981358  631782 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
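
(Because the docker driver shares the host kernel, minikube passes the long --ignore-preflight-errors list seen in the Start line above rather than letting SystemVerification and the port/file checks fail. The preflight phase can be re-run in isolation with the same config — a sketch:)

    sudo env PATH=/var/lib/minikube/binaries/v1.34.1:$PATH \
      kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem
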
	I1124 03:11:04.428062  636397 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:11:04.452862  636397 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:11:04.458281  636397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:04.471103  636397 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:11:04.471281  636397 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:11:04.471346  636397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:04.523060  636397 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:04.523089  636397 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:11:04.523147  636397 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:04.562653  636397 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:04.562684  636397 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:11:04.562695  636397 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 03:11:04.562806  636397 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:11:04.562939  636397 ssh_runner.go:195] Run: crio config
	I1124 03:11:04.638357  636397 cni.go:84] Creating CNI manager for ""
	I1124 03:11:04.638382  636397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:04.638402  636397 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:11:04.638430  636397 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993813 NodeName:default-k8s-diff-port-993813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:04.638602  636397 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993813"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:11:04.638670  636397 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:04.649639  636397 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:11:04.649707  636397 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:04.665638  636397 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 03:11:04.685753  636397 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:04.706728  636397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1124 03:11:04.727449  636397 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:04.732474  636397 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
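
(This one-liner, which also appears for the other profiles above, is a privilege-aware /etc/hosts edit: filter out any stale control-plane record, append the fresh one, write to a temp file, and copy it back with sudo in a single step, since a plain `>` redirect would run unprivileged. Unrolled for readability, with $'\t' standing in for the literal tab in the log:)

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo $'192.168.76.2\tcontrol-plane.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
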
	I1124 03:11:04.750204  636397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:04.878850  636397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:04.905254  636397 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813 for IP: 192.168.76.2
	I1124 03:11:04.905269  636397 certs.go:195] generating shared ca certs ...
	I1124 03:11:04.905285  636397 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:04.905416  636397 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:04.905456  636397 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:04.905465  636397 certs.go:257] generating profile certs ...
	I1124 03:11:04.905521  636397 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key
	I1124 03:11:04.905533  636397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.crt with IP's: []
	I1124 03:11:05.049206  636397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.crt ...
	I1124 03:11:05.049242  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.crt: {Name:mk818bd7c5f4a63b56241a5f5b815a5c96f8af6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.049427  636397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key ...
	I1124 03:11:05.049453  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key: {Name:mkb83de72d7be9aac5a3b6d7ffec3016949857c0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.049582  636397 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619
	I1124 03:11:05.049600  636397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 03:11:05.290005  636397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619 ...
	I1124 03:11:05.290086  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619: {Name:mkbe37296015109a5ee861e9a87e29d9440c243c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.290281  636397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619 ...
	I1124 03:11:05.290300  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619: {Name:mk596e1b3db31f58cc0b8eb40ec231f070ee1f14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.290403  636397 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt.200cd619 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt
	I1124 03:11:05.290503  636397 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key
	I1124 03:11:05.290584  636397 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key
	I1124 03:11:05.290607  636397 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt with IP's: []
	I1124 03:11:05.405376  636397 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt ...
	I1124 03:11:05.405411  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt: {Name:mk5c1d3bc48ab0dc1254aae88b7ec32711e77a8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.405578  636397 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key ...
	I1124 03:11:05.405599  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key: {Name:mk42df1886b091d28840c422e5e20c0f8c4e5569 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:05.405873  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:05.405948  636397 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:05.405959  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:05.406001  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:05.406031  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:05.406059  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:05.406113  636397 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:05.406989  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:05.434254  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:05.460107  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:05.485830  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:05.511902  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 03:11:05.535282  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:11:05.558610  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:05.579558  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:11:05.598340  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:05.620622  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:05.644303  636397 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:05.667291  636397 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:05.681732  636397 ssh_runner.go:195] Run: openssl version
	I1124 03:11:05.689816  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:05.701038  636397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:05.705646  636397 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:05.705699  636397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:05.763638  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:11:05.776210  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:05.789125  636397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:05.794258  636397 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:05.794315  636397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:05.853631  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:11:05.886140  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:05.898078  636397 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:05.902187  636397 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:05.902252  636397 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:06.009788  636397 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:06.034772  636397 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:06.040075  636397 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:11:06.040136  636397 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:06.040285  636397 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:06.040340  636397 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:06.076603  636397 cri.go:89] found id: ""
	I1124 03:11:06.076664  636397 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:06.084730  636397 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:11:06.096161  636397 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:11:06.096213  636397 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:11:06.104666  636397 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:11:06.104687  636397 kubeadm.go:158] found existing configuration files:
	
	I1124 03:11:06.104736  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1124 03:11:06.112142  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:11:06.112188  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:11:06.119278  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1124 03:11:06.126557  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:11:06.126604  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:11:06.133611  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1124 03:11:06.141319  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:11:06.141384  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:11:06.151450  636397 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1124 03:11:06.162299  636397 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:11:06.162489  636397 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
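
Condensed, the stale-config pass above greps each kubeconfig for the expected control-plane endpoint and deletes any file that does not mention it; a sketch of the same loop, with the endpoint and file list taken from the log:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/$f" \
	    || sudo rm -f "/etc/kubernetes/$f"
	done
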
	I1124 03:11:06.173268  636397 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:11:06.365493  636397 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:11:06.445191  636397 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
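
The SystemVerification warning above means kubeadm could not read the kernel configuration: the "configs" module is absent from this 6.8.0-1044-gcp kernel, so /proc/config.gz cannot be materialized. A sketch of checking the usual fallback locations by hand, assuming the standard lookup paths of kubeadm's system validators:

	# Either source satisfies the preflight check when present.
	if [ -r /proc/config.gz ]; then
	  zcat /proc/config.gz | head -n 1
	else
	  head -n 1 "/boot/config-$(uname -r)"
	fi
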
	I1124 03:11:08.034430  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-438041
	
	I1124 03:11:08.034458  639611 ubuntu.go:182] provisioning hostname "newest-cni-438041"
	I1124 03:11:08.034525  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.053306  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:08.053556  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:08.053570  639611 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-438041 && echo "newest-cni-438041" | sudo tee /etc/hostname
	I1124 03:11:08.201604  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-438041
	
	I1124 03:11:08.201678  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.220581  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:08.220950  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:08.220977  639611 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-438041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-438041/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-438041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:11:08.358818  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:11:08.358853  639611 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:11:08.358877  639611 ubuntu.go:190] setting up certificates
	I1124 03:11:08.358902  639611 provision.go:84] configureAuth start
	I1124 03:11:08.358979  639611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:08.377513  639611 provision.go:143] copyHostCerts
	I1124 03:11:08.377573  639611 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:11:08.377584  639611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:11:08.377654  639611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:11:08.377742  639611 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:11:08.377752  639611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:11:08.377785  639611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:11:08.377851  639611 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:11:08.377860  639611 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:11:08.377905  639611 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:11:08.378033  639611 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.newest-cni-438041 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-438041]
	I1124 03:11:08.493906  639611 provision.go:177] copyRemoteCerts
	I1124 03:11:08.493995  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:11:08.494042  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.512353  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:08.611703  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:11:08.635092  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:11:08.653622  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:11:08.675705  639611 provision.go:87] duration metric: took 316.785216ms to configureAuth
	I1124 03:11:08.675736  639611 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:11:08.676005  639611 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:08.676156  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:08.697718  639611 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:08.698047  639611 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33473 <nil> <nil>}
	I1124 03:11:08.698069  639611 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:11:08.991292  639611 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:11:08.991321  639611 machine.go:97] duration metric: took 4.122852164s to provisionDockerMachine
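
The sysconfig drop-in written just above only has an effect if crio.service actually reads it; a quick way to confirm the wiring, assuming the kicbase image's unit references the file via an EnvironmentFile= directive:

	# The CRIO_MINIKUBE_OPTIONS variable set above is only picked up if an
	# EnvironmentFile= line in the unit (or a drop-in) points at the file.
	systemctl cat crio | grep EnvironmentFile
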
	I1124 03:11:08.991334  639611 client.go:176] duration metric: took 11.662821141s to LocalClient.Create
	I1124 03:11:08.991367  639611 start.go:167] duration metric: took 11.662898329s to libmachine.API.Create "newest-cni-438041"
	I1124 03:11:08.991381  639611 start.go:293] postStartSetup for "newest-cni-438041" (driver="docker")
	I1124 03:11:08.991395  639611 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:11:08.991454  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:11:08.991515  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.009958  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.110159  639611 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:11:09.113555  639611 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:11:09.113584  639611 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:11:09.113597  639611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:11:09.113650  639611 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:11:09.113762  639611 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:11:09.113944  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:11:09.121410  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:09.140617  639611 start.go:296] duration metric: took 149.222262ms for postStartSetup
	I1124 03:11:09.141052  639611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:09.158606  639611 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json ...
	I1124 03:11:09.158846  639611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:11:09.158906  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.176052  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.271931  639611 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:11:09.276348  639611 start.go:128] duration metric: took 11.950609978s to createHost
	I1124 03:11:09.276376  639611 start.go:83] releasing machines lock for "newest-cni-438041", held for 11.950766604s
	I1124 03:11:09.276440  639611 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:09.294908  639611 ssh_runner.go:195] Run: cat /version.json
	I1124 03:11:09.294952  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.294957  639611 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:11:09.295031  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:09.313079  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.314881  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:09.408772  639611 ssh_runner.go:195] Run: systemctl --version
	I1124 03:11:09.469031  639611 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:11:09.504409  639611 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:11:09.508820  639611 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:11:09.508877  639611 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:11:09.533917  639611 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
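
The find invocation above is logged after shell expansion, which drops its quoting; an equivalent, correctly quoted form (with a safer -exec body that does not substitute {} into the shell string twice):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;
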
	I1124 03:11:09.533945  639611 start.go:496] detecting cgroup driver to use...
	I1124 03:11:09.533978  639611 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:11:09.534024  639611 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:11:09.550223  639611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:11:09.561378  639611 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:11:09.561431  639611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:11:09.576700  639611 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:11:09.592718  639611 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:11:09.686327  639611 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:11:09.778323  639611 docker.go:234] disabling docker service ...
	I1124 03:11:09.778388  639611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:11:09.797725  639611 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:11:09.809981  639611 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:11:09.897574  639611 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:11:09.981763  639611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:11:09.993604  639611 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:11:10.008039  639611 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:11:10.008088  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.017807  639611 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:11:10.017915  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.026036  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.034318  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.042375  639611 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:11:10.050115  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.058198  639611 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.071036  639611 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:10.079079  639611 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:11:10.085901  639611 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:11:10.092631  639611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:10.187290  639611 ssh_runner.go:195] Run: sudo systemctl restart crio
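
Before this restart, the sequence of sed edits above has rewritten the pause image, cgroup manager, conmon cgroup, and default sysctls in the CRI-O drop-in; a sketch of inspecting the result (expected values reconstructed from the commands in the log, exact file layout may differ):

	grep -nE 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected after the edits, roughly:
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]
	#   pause_image = "registry.k8s.io/pause:3.10.1"
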
	I1124 03:11:10.321446  639611 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:11:10.321516  639611 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:11:10.325320  639611 start.go:564] Will wait 60s for crictl version
	I1124 03:11:10.325377  639611 ssh_runner.go:195] Run: which crictl
	I1124 03:11:10.328940  639611 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:11:10.355782  639611 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:11:10.355854  639611 ssh_runner.go:195] Run: crio --version
	I1124 03:11:10.386668  639611 ssh_runner.go:195] Run: crio --version
	I1124 03:11:10.419997  639611 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:11:10.421239  639611 cli_runner.go:164] Run: docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:11:10.440078  639611 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:11:10.443982  639611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
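
The one-liner filters any previous host.minikube.internal entry out of /etc/hosts, appends the current gateway address, and copies the result back through a temp file so the file is never read and truncated at once. From inside the node the pinned name can be verified with (address from the log):

	getent hosts host.minikube.internal
	# expect: 192.168.94.1    host.minikube.internal
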
	I1124 03:11:10.455537  639611 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 03:11:10.456654  639611 kubeadm.go:884] updating cluster {Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:11:10.456815  639611 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:11:10.456863  639611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:10.490472  639611 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:10.490492  639611 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:11:10.490540  639611 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:10.519699  639611 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:10.519720  639611 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:11:10.519729  639611 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:11:10.519828  639611 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-438041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
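
The empty ExecStart= line in the generated unit above is the standard systemd override idiom: a non-oneshot service may carry only one ExecStart, so the inherited command must be cleared before the replacement is declared. Once the files are scp'd and daemon-reload runs (below), the effective command line can be checked with:

	systemctl cat kubelet | grep -A 1 '^ExecStart=$'
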
	I1124 03:11:10.519912  639611 ssh_runner.go:195] Run: crio config
	I1124 03:11:10.565191  639611 cni.go:84] Creating CNI manager for ""
	I1124 03:11:10.565215  639611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:10.565239  639611 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 03:11:10.565270  639611 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-438041 NodeName:newest-cni-438041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:10.565418  639611 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-438041"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:11:10.565482  639611 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:10.573438  639611 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:11:10.573499  639611 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:10.581224  639611 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:11:10.593276  639611 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:10.607346  639611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
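
The 2211-byte file just copied is the kubeadm manifest assembled above. Recent kubeadm releases can vet such a file offline before init; a sketch, assuming the validate subcommand is available in the v1.34.1 binary shipped to the node:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
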
	I1124 03:11:10.619134  639611 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:10.622475  639611 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:10.631680  639611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:10.724670  639611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:10.750283  639611 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041 for IP: 192.168.94.2
	I1124 03:11:10.750306  639611 certs.go:195] generating shared ca certs ...
	I1124 03:11:10.750339  639611 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:10.750511  639611 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:10.750555  639611 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:10.750565  639611 certs.go:257] generating profile certs ...
	I1124 03:11:10.750620  639611 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key
	I1124 03:11:10.750633  639611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.crt with IP's: []
	I1124 03:11:10.920017  639611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.crt ...
	I1124 03:11:10.920047  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.crt: {Name:mkfd139af0a71cd4698b8ff5b3e638153eeb0dbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:10.920228  639611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key ...
	I1124 03:11:10.920243  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key: {Name:mke75272685634ebc2912579601c6ca7cb4478b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:10.920357  639611 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183
	I1124 03:11:10.920374  639611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 03:11:11.156793  639611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183 ...
	I1124 03:11:11.156820  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183: {Name:mke55e2e412acbf5b903a8d8b4a7d2880f9fbe7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:11.157004  639611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183 ...
	I1124 03:11:11.157022  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183: {Name:mkad44470d73de35f2d3ae6d5e6d61417cfe11c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:11.157103  639611 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt.52539183 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt
	I1124 03:11:11.157202  639611 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key
	I1124 03:11:11.157264  639611 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key
	I1124 03:11:11.157285  639611 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt with IP's: []
	I1124 03:11:11.183331  639611 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt ...
	I1124 03:11:11.183357  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt: {Name:mkaf061d70fce7922fd95db6d82ac8186d66239f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:11.183478  639611 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key ...
	I1124 03:11:11.183490  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key: {Name:mk44940b01cb7f629207bffeb036b8a7e5d40814 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
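
Each certs.go:364 step above issues a leaf certificate from the shared CA with the listed IPs embedded as SANs (crypto.go does this natively in Go). A rough openssl equivalent for the apiserver cert, with hypothetical file names and the SAN list taken from the log:

	openssl req -new -newkey rsa:2048 -nodes -subj "/CN=minikube" \
	  -keyout apiserver.key -out apiserver.csr
	# (bash: process substitution supplies the SAN extension)
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 365 -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.94.2')
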
	I1124 03:11:11.183656  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:11.183693  639611 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:11.183702  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:11.183724  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:11.183746  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:11.183768  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:11.183810  639611 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:11.184490  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:11.202414  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:11.218915  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:11.235233  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:11.251127  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:11:11.267814  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:11:11.284563  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:11.300790  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:11:11.316788  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:11.334413  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:11.350424  639611 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:11.366533  639611 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:11.378365  639611 ssh_runner.go:195] Run: openssl version
	I1124 03:11:11.384126  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:11.391937  639611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:11.395429  639611 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:11.395475  639611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:11.428268  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:11:11.435958  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:11.443551  639611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:11.446861  639611 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:11.446917  639611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:11.480561  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:11.488521  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:11.496317  639611 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:11.499903  639611 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:11.500486  639611 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:11.534970  639611 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:11:11.542760  639611 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:11.546025  639611 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:11:11.546084  639611 kubeadm.go:401] StartCluster: {Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:11.546189  639611 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:11.546235  639611 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:11.573079  639611 cri.go:89] found id: ""
	I1124 03:11:11.573143  639611 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:11.580989  639611 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:11:11.588193  639611 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:11:11.588243  639611 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:11:11.595578  639611 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:11:11.595596  639611 kubeadm.go:158] found existing configuration files:
	
	I1124 03:11:11.595632  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:11:11.602806  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:11:11.602846  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:11:11.609710  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:11:11.617281  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:11:11.617327  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:11:11.624606  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:11:11.631999  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:11:11.632041  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:11:11.640350  639611 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:11:11.648359  639611 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:11:11.648402  639611 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:11:11.656826  639611 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:11:11.705613  639611 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:11:11.705684  639611 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:11:11.726192  639611 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:11:11.726285  639611 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:11:11.726340  639611 kubeadm.go:319] OS: Linux
	I1124 03:11:11.726397  639611 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:11:11.726461  639611 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:11:11.726524  639611 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:11:11.726587  639611 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:11:11.726686  639611 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:11:11.726790  639611 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:11:11.726861  639611 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:11:11.726943  639611 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:11:11.786505  639611 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:11:11.786613  639611 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:11:11.786747  639611 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:11:11.794629  639611 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1124 03:11:08.757098  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	W1124 03:11:10.757264  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	I1124 03:11:11.798699  639611 out.go:252]   - Generating certificates and keys ...
	I1124 03:11:11.798797  639611 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:11:11.798912  639611 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:11:11.963263  639611 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:11:12.107595  639611 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:11:07.983375  631782 out.go:252]   - Generating certificates and keys ...
	I1124 03:11:07.983499  631782 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:11:07.983606  631782 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:11:09.010428  631782 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:11:09.257194  631782 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:11:09.494535  631782 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:11:09.716956  631782 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:11:09.775865  631782 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:11:09.776099  631782 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-603010] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:11:10.030969  631782 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:11:10.031162  631782 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-603010] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:11:10.290289  631782 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:11:10.445776  631782 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:11:10.719700  631782 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:11:10.719788  631782 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:11:10.954056  631782 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:11:11.224490  631782 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:11:11.470938  631782 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:11:11.927378  631782 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:11:12.303932  631782 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:11:12.304513  631782 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:11:12.307975  631782 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:11:12.309284  631782 out.go:252]   - Booting up control plane ...
	I1124 03:11:12.309381  631782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:11:12.309465  631782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:11:12.310009  631782 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:11:12.339837  631782 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:11:12.340003  631782 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:11:12.347388  631782 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:11:12.347620  631782 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:11:12.347698  631782 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:11:12.466844  631782 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:11:12.466970  631782 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:11:12.233009  639611 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:11:12.451335  639611 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:11:12.593355  639611 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:11:12.593574  639611 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-438041] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:11:13.275810  639611 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:11:13.276017  639611 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-438041] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:11:14.145354  639611 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:11:14.614138  639611 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:11:14.941086  639611 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:11:14.941227  639611 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:11:15.058919  639611 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:11:15.267378  639611 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:11:15.939232  639611 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:11:16.257592  639611 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:11:16.635822  639611 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:11:16.636485  639611 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:11:16.640110  639611 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	W1124 03:11:13.256972  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	W1124 03:11:15.259252  623347 node_ready.go:57] node "old-k8s-version-579951" has "Ready":"False" status (will retry)
	I1124 03:11:12.968700  631782 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.726277ms
	I1124 03:11:12.972359  631782 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:11:12.972498  631782 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 03:11:12.972634  631782 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:11:12.972778  631782 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:11:15.168823  631782 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.194903045s
	I1124 03:11:15.395212  631782 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.422782586s
	I1124 03:11:16.974533  631782 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002117874s
	I1124 03:11:16.990327  631782 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:11:17.001157  631782 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:11:17.009558  631782 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:11:17.009832  631782 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-603010 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:11:17.017079  631782 kubeadm.go:319] [bootstrap-token] Using token: qixyjy.v1lkfw8d9c2mcnrf
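The token on the line above follows kubeadm's documented bootstrap-token format: a 6-character token ID and a 16-character secret, both lowercase alphanumeric, joined by a dot. A minimal Go sketch of that format check, using the token from this run as the sample:

package main

import (
	"fmt"
	"regexp"
)

// bootstrapTokenRe matches kubeadm's documented bootstrap-token format:
// a 6-char ID and a 16-char secret of [a-z0-9], joined by a dot.
var bootstrapTokenRe = regexp.MustCompile(`^[a-z0-9]{6}\.[a-z0-9]{16}$`)

func main() {
	for _, tok := range []string{"qixyjy.v1lkfw8d9c2mcnrf", "not-a-token"} {
		fmt.Printf("%-28s valid=%v\n", tok, bootstrapTokenRe.MatchString(tok))
	}
}
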
	I1124 03:11:16.641561  639611 out.go:252]   - Booting up control plane ...
	I1124 03:11:16.641675  639611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:11:16.641789  639611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:11:16.642679  639611 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:11:16.660968  639611 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:11:16.661101  639611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:11:16.668686  639611 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:11:16.669004  639611 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:11:16.669064  639611 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:11:16.793748  639611 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:11:16.793925  639611 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:11:17.712301  636397 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:11:17.712380  636397 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:11:17.712515  636397 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:11:17.712609  636397 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:11:17.712667  636397 kubeadm.go:319] OS: Linux
	I1124 03:11:17.712717  636397 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:11:17.712772  636397 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:11:17.712846  636397 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:11:17.712998  636397 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:11:17.713081  636397 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:11:17.713158  636397 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:11:17.713228  636397 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:11:17.713298  636397 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:11:17.713410  636397 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:11:17.713559  636397 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:11:17.713706  636397 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:11:17.713767  636397 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:11:17.715195  636397 out.go:252]   - Generating certificates and keys ...
	I1124 03:11:17.715298  636397 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:11:17.715442  636397 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:11:17.715523  636397 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:11:17.715597  636397 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:11:17.715657  636397 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:11:17.715733  636397 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:11:17.715822  636397 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:11:17.716053  636397 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-993813 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:11:17.716134  636397 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:11:17.716334  636397 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-993813 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:11:17.716443  636397 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:11:17.716537  636397 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:11:17.716600  636397 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:11:17.716682  636397 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:11:17.716772  636397 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:11:17.716823  636397 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:11:17.716938  636397 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:11:17.717053  636397 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:11:17.717141  636397 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:11:17.717221  636397 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:11:17.717295  636397 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:11:17.718959  636397 out.go:252]   - Booting up control plane ...
	I1124 03:11:17.719049  636397 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:11:17.719135  636397 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:11:17.719219  636397 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:11:17.719341  636397 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:11:17.719462  636397 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:11:17.719560  636397 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:11:17.719632  636397 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:11:17.719681  636397 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:11:17.719830  636397 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:11:17.719976  636397 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:11:17.720049  636397 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501467711s
	I1124 03:11:17.720160  636397 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:11:17.720268  636397 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1124 03:11:17.720406  636397 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:11:17.720513  636397 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:11:17.720614  636397 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.599087563s
	I1124 03:11:17.720742  636397 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.501028525s
	I1124 03:11:17.720844  636397 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.00179766s
	I1124 03:11:17.721018  636397 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:11:17.721192  636397 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:11:17.721298  636397 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:11:17.721558  636397 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-993813 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:11:17.721622  636397 kubeadm.go:319] [bootstrap-token] Using token: q5wdgj.p9bwnkl5amhf01kb
	I1124 03:11:17.722776  636397 out.go:252]   - Configuring RBAC rules ...
	I1124 03:11:17.722949  636397 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:11:17.723089  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:11:17.723273  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:11:17.723470  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:11:17.723636  636397 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:11:17.723759  636397 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:11:17.723924  636397 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:11:17.723997  636397 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:11:17.724057  636397 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:11:17.724062  636397 kubeadm.go:319] 
	I1124 03:11:17.724140  636397 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:11:17.724145  636397 kubeadm.go:319] 
	I1124 03:11:17.724249  636397 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:11:17.724254  636397 kubeadm.go:319] 
	I1124 03:11:17.724288  636397 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:11:17.724365  636397 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:11:17.724429  636397 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:11:17.724434  636397 kubeadm.go:319] 
	I1124 03:11:17.724504  636397 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:11:17.724509  636397 kubeadm.go:319] 
	I1124 03:11:17.724570  636397 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:11:17.724576  636397 kubeadm.go:319] 
	I1124 03:11:17.724642  636397 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:11:17.724751  636397 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:11:17.724845  636397 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:11:17.724850  636397 kubeadm.go:319] 
	I1124 03:11:17.724962  636397 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:11:17.725053  636397 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:11:17.725058  636397 kubeadm.go:319] 
	I1124 03:11:17.725156  636397 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token q5wdgj.p9bwnkl5amhf01kb \
	I1124 03:11:17.725281  636397 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:11:17.725306  636397 kubeadm.go:319] 	--control-plane 
	I1124 03:11:17.725311  636397 kubeadm.go:319] 
	I1124 03:11:17.725412  636397 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:11:17.725417  636397 kubeadm.go:319] 
	I1124 03:11:17.725515  636397 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token q5wdgj.p9bwnkl5amhf01kb \
	I1124 03:11:17.725654  636397 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
	I1124 03:11:17.725664  636397 cni.go:84] Creating CNI manager for ""
	I1124 03:11:17.725672  636397 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:17.727357  636397 out.go:179] * Configuring CNI (Container Networking Interface) ...
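The kindnet recommendation above is what minikube picks for the docker driver combined with the crio runtime: it first stats /opt/cni/bin/portmap, then copies the CNI manifest to /var/tmp/minikube/cni.yaml and applies it with the pinned kubectl, as the ssh_runner lines further down in this log show. A minimal, hedged sketch of that apply step using a local kubectl on PATH (paths are illustrative; the run itself invokes /var/lib/minikube/binaries/v1.34.1/kubectl over SSH):

package main

import (
	"fmt"
	"os/exec"
)

// applyCNI applies a CNI manifest the way the logs do, but with a local
// kubectl. Paths are illustrative, not minikube's exact invocation.
func applyCNI(kubeconfig, manifest string) error {
	cmd := exec.Command("kubectl", "apply",
		"--kubeconfig="+kubeconfig, "-f", manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyCNI("/var/lib/minikube/kubeconfig", "/var/tmp/minikube/cni.yaml")
	fmt.Println("apply cni:", err)
}
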
	I1124 03:11:17.018572  631782 out.go:252]   - Configuring RBAC rules ...
	I1124 03:11:17.018732  631782 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:11:17.021245  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:11:17.025919  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:11:17.028242  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:11:17.030590  631782 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:11:17.032723  631782 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:11:17.380197  631782 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:11:17.802727  631782 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:11:18.381075  631782 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:11:18.382320  631782 kubeadm.go:319] 
	I1124 03:11:18.382408  631782 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:11:18.382416  631782 kubeadm.go:319] 
	I1124 03:11:18.382508  631782 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:11:18.382522  631782 kubeadm.go:319] 
	I1124 03:11:18.382554  631782 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:11:18.382630  631782 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:11:18.382704  631782 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:11:18.382712  631782 kubeadm.go:319] 
	I1124 03:11:18.382781  631782 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:11:18.382791  631782 kubeadm.go:319] 
	I1124 03:11:18.382850  631782 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:11:18.382859  631782 kubeadm.go:319] 
	I1124 03:11:18.382948  631782 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:11:18.383059  631782 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:11:18.383153  631782 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:11:18.383164  631782 kubeadm.go:319] 
	I1124 03:11:18.383265  631782 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:11:18.383360  631782 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:11:18.383370  631782 kubeadm.go:319] 
	I1124 03:11:18.383510  631782 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token qixyjy.v1lkfw8d9c2mcnrf \
	I1124 03:11:18.383708  631782 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:11:18.383747  631782 kubeadm.go:319] 	--control-plane 
	I1124 03:11:18.383767  631782 kubeadm.go:319] 
	I1124 03:11:18.383880  631782 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:11:18.383909  631782 kubeadm.go:319] 
	I1124 03:11:18.384037  631782 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token qixyjy.v1lkfw8d9c2mcnrf \
	I1124 03:11:18.384180  631782 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
	I1124 03:11:18.387182  631782 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:11:18.387348  631782 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:11:18.387386  631782 cni.go:84] Creating CNI manager for ""
	I1124 03:11:18.387399  631782 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:18.389706  631782 out.go:179] * Configuring CNI (Container Networking Interface) ...
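The --discovery-token-ca-cert-hash printed with the join commands above is the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. All three profiles in this run print the same hash (aff636c2...) because they share one minikube-generated CA under the same .minikube home. A sketch that recomputes it, reading ca.crt from the certificateDir the logs use:

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash computes kubeadm's discovery-token-ca-cert-hash: the SHA-256
// of the CA certificate's DER-encoded Subject Public Key Info.
func caCertHash(path string) (string, error) {
	pemBytes, err := os.ReadFile(path)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(h)
}
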
	I1124 03:11:17.729080  636397 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:11:17.735280  636397 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:11:17.735299  636397 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:11:17.750224  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:11:17.964488  636397 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:11:17.964571  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:17.964583  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-993813 minikube.k8s.io/updated_at=2025_11_24T03_11_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=default-k8s-diff-port-993813 minikube.k8s.io/primary=true
	I1124 03:11:17.977541  636397 ops.go:34] apiserver oom_adj: -16
	I1124 03:11:18.089531  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:18.589931  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
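The ops.go line above records the apiserver's OOM-killer adjustment: -16 on the legacy /proc/<pid>/oom_adj scale, i.e. strongly protected from being OOM-killed. A small sketch of the same read the shell one-liner performs, inspecting the current process instead of resolving a PID with pgrep:

package main

import (
	"fmt"
	"os"
	"strings"
)

// oomAdj reads a process's legacy oom_adj value from procfs, the same
// file the log's `cat /proc/$(pgrep kube-apiserver)/oom_adj` touches.
func oomAdj(pid int) (string, error) {
	b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

func main() {
	v, err := oomAdj(os.Getpid())
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("oom_adj:", v) // the apiserver in the run above shows -16
}
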
	I1124 03:11:17.757544  623347 node_ready.go:49] node "old-k8s-version-579951" is "Ready"
	I1124 03:11:17.757568  623347 node_ready.go:38] duration metric: took 13.503706583s for node "old-k8s-version-579951" to be "Ready" ...
	I1124 03:11:17.757591  623347 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:17.757632  623347 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:11:17.769351  623347 api_server.go:72] duration metric: took 13.944624755s to wait for apiserver process to appear ...
	I1124 03:11:17.769381  623347 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:17.769404  623347 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 03:11:17.773486  623347 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 03:11:17.774606  623347 api_server.go:141] control plane version: v1.28.0
	I1124 03:11:17.774639  623347 api_server.go:131] duration metric: took 5.249615ms to wait for apiserver health ...
	I1124 03:11:17.774650  623347 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:17.778732  623347 system_pods.go:59] 8 kube-system pods found
	I1124 03:11:17.778769  623347 system_pods.go:61] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:17.778779  623347 system_pods.go:61] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:17.778787  623347 system_pods.go:61] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:17.778792  623347 system_pods.go:61] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:17.778797  623347 system_pods.go:61] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:17.778806  623347 system_pods.go:61] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:17.778810  623347 system_pods.go:61] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:17.778817  623347 system_pods.go:61] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:17.778824  623347 system_pods.go:74] duration metric: took 4.167214ms to wait for pod list to return data ...
	I1124 03:11:17.778835  623347 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:17.781411  623347 default_sa.go:45] found service account: "default"
	I1124 03:11:17.781435  623347 default_sa.go:55] duration metric: took 2.594162ms for default service account to be created ...
	I1124 03:11:17.781446  623347 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:11:17.784981  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:17.785018  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:17.785031  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:17.785044  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:17.785050  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:17.785061  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:17.785066  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:17.785076  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:17.785090  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:17.785127  623347 retry.go:31] will retry after 271.484184ms: missing components: kube-dns
	I1124 03:11:18.065194  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:18.065237  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:18.065248  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:18.065257  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:18.065263  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:18.065269  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:18.065274  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:18.065279  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:18.065287  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:18.065306  623347 retry.go:31] will retry after 388.018904ms: missing components: kube-dns
	I1124 03:11:18.457864  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:18.457936  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:18.457946  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:18.457961  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:18.457972  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:18.457978  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:18.457984  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:18.457991  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:18.457999  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:18.458022  623347 retry.go:31] will retry after 449.601826ms: missing components: kube-dns
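The retry.go lines above poll the kube-system pods with a growing, jittered delay (271ms, 388ms, 449ms) until kube-dns is no longer the missing component. A rough sketch of that pattern; the base delay, growth factor, and jitter fraction here are illustrative, not minikube's actual constants:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// backoff returns a wait that grows with each attempt, plus random jitter
// so parallel pollers (like the tests in this run) don't synchronize.
func backoff(attempt int) time.Duration {
	base := 250 * time.Millisecond
	d := time.Duration(float64(base) * (1 + 0.5*float64(attempt)))
	jitter := time.Duration(rand.Int63n(int64(d) / 4))
	return d + jitter
}

func main() {
	for i := 0; i < 4; i++ {
		fmt.Printf("attempt %d: wait %v\n", i, backoff(i))
	}
}
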
	I1124 03:11:18.911831  623347 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:18.911859  623347 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Running
	I1124 03:11:18.911865  623347 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running
	I1124 03:11:18.911869  623347 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running
	I1124 03:11:18.911873  623347 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running
	I1124 03:11:18.911877  623347 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running
	I1124 03:11:18.911880  623347 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running
	I1124 03:11:18.911916  623347 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running
	I1124 03:11:18.911921  623347 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Running
	I1124 03:11:18.911931  623347 system_pods.go:126] duration metric: took 1.130477915s to wait for k8s-apps to be running ...
	I1124 03:11:18.911944  623347 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:11:18.911996  623347 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:18.925774  623347 system_svc.go:56] duration metric: took 13.819357ms WaitForService to wait for kubelet
	I1124 03:11:18.925804  623347 kubeadm.go:587] duration metric: took 15.101081639s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:11:18.925827  623347 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:18.928599  623347 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:18.928633  623347 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:18.928652  623347 node_conditions.go:105] duration metric: took 2.818338ms to run NodePressure ...
	I1124 03:11:18.928667  623347 start.go:242] waiting for startup goroutines ...
	I1124 03:11:18.928681  623347 start.go:247] waiting for cluster config update ...
	I1124 03:11:18.928701  623347 start.go:256] writing updated cluster config ...
	I1124 03:11:18.929049  623347 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:18.933285  623347 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:18.937686  623347 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.946299  623347 pod_ready.go:94] pod "coredns-5dd5756b68-5nwx9" is "Ready"
	I1124 03:11:18.946320  623347 pod_ready.go:86] duration metric: took 8.611977ms for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.950801  623347 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.960988  623347 pod_ready.go:94] pod "etcd-old-k8s-version-579951" is "Ready"
	I1124 03:11:18.961015  623347 pod_ready.go:86] duration metric: took 10.19455ms for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.965881  623347 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.974882  623347 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-579951" is "Ready"
	I1124 03:11:18.974933  623347 pod_ready.go:86] duration metric: took 9.016779ms for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:18.977770  623347 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:19.341020  623347 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-579951" is "Ready"
	I1124 03:11:19.341052  623347 pod_ready.go:86] duration metric: took 363.250058ms for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:19.538869  623347 pod_ready.go:83] waiting for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:19.937877  623347 pod_ready.go:94] pod "kube-proxy-r82jh" is "Ready"
	I1124 03:11:19.937925  623347 pod_ready.go:86] duration metric: took 399.001292ms for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:20.140275  623347 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:20.537761  623347 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-579951" is "Ready"
	I1124 03:11:20.537795  623347 pod_ready.go:86] duration metric: took 397.491187ms for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:20.537812  623347 pod_ready.go:40] duration metric: took 1.604492738s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:20.582109  623347 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 03:11:20.583699  623347 out.go:203] 
	W1124 03:11:20.584752  623347 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:11:20.585796  623347 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:11:20.587217  623347 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-579951" cluster and "default" namespace by default
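The version-skew warning a few lines above follows kubectl's support policy: the client is only supported within one minor version of kube-apiserver, and a 1.34 kubectl against a 1.28 control plane is six minors out, hence the suggestion to use the bundled 'minikube kubectl' instead. A trivial sketch of that check:

package main

import "fmt"

// minorSkew returns the absolute minor-version distance between a kubectl
// client and the apiserver; kubectl is supported only when this is <= 1.
func minorSkew(clientMinor, serverMinor int) int {
	d := clientMinor - serverMinor
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	// client 1.34.2 vs cluster 1.28.0 -> skew 6, well past the supported 1
	fmt.Println("minor skew:", minorSkew(34, 28))
}
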
	I1124 03:11:17.795245  639611 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001564938s
	I1124 03:11:17.799260  639611 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:11:17.799423  639611 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 03:11:17.799562  639611 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:11:17.799651  639611 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:11:20.070827  639611 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.271449475s
	I1124 03:11:20.290602  639611 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.491348646s
	I1124 03:11:21.801475  639611 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002149825s
	I1124 03:11:21.812595  639611 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:11:21.822553  639611 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:11:21.831169  639611 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:11:21.831446  639611 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-438041 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:11:21.841628  639611 kubeadm.go:319] [bootstrap-token] Using token: yx8fea.c13myzzt6w383nef
	I1124 03:11:21.842995  639611 out.go:252]   - Configuring RBAC rules ...
	I1124 03:11:21.843145  639611 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:11:21.846076  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:11:21.851007  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:11:21.853367  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:11:21.856222  639611 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:11:21.859271  639611 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
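The [control-plane-check] lines earlier in this run probe each static pod's local health endpoint: kube-apiserver's /livez on the advertised address, kube-controller-manager's /healthz on 127.0.0.1:10257, and kube-scheduler's /livez on 127.0.0.1:10259. A minimal probe in the same spirit; the components serve self-signed TLS, so certificate verification is skipped for this strictly local check:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// healthy reports whether a control-plane health endpoint answers 200.
// InsecureSkipVerify is acceptable here only because these are loopback
// probes of self-signed components, as in the kubeadm checks above.
func healthy(url string) bool {
	c := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := c.Get(url)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	for _, u := range []string{
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	} {
		fmt.Println(u, healthy(u))
	}
}
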
	I1124 03:11:19.090574  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:19.589602  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.090576  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.590533  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.089866  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.589593  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.089582  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.590222  636397 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.673854  636397 kubeadm.go:1114] duration metric: took 4.709348594s to wait for elevateKubeSystemPrivileges
	I1124 03:11:22.673908  636397 kubeadm.go:403] duration metric: took 16.63377865s to StartCluster
	I1124 03:11:22.673934  636397 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:22.674008  636397 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:22.675076  636397 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:22.675302  636397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:22.675326  636397 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:22.675390  636397 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993813"
	I1124 03:11:22.675304  636397 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:22.675418  636397 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993813"
	I1124 03:11:22.675431  636397 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993813"
	I1124 03:11:22.675411  636397 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993813"
	I1124 03:11:22.675530  636397 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:11:22.675536  636397 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:22.675814  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:11:22.676034  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:11:22.676852  636397 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:22.678754  636397 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:22.703150  636397 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993813"
	I1124 03:11:22.703198  636397 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:11:22.703676  636397 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:11:22.704736  636397 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:18.390820  631782 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:11:18.395615  631782 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:11:18.395633  631782 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:11:18.409234  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:11:18.710608  631782 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:11:18.710754  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:18.710853  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-603010 minikube.k8s.io/updated_at=2025_11_24T03_11_18_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=no-preload-603010 minikube.k8s.io/primary=true
	I1124 03:11:18.818373  631782 ops.go:34] apiserver oom_adj: -16
	I1124 03:11:18.818465  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:19.318531  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:19.819135  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.319402  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:20.819441  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.319189  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:21.818604  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.319077  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:22.706096  636397 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:22.706117  636397 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:22.706176  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:22.737283  636397 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:22.737304  636397 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:22.737370  636397 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:11:22.740863  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:22.761473  636397 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:11:22.778645  636397 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:22.830555  636397 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:22.862561  636397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:22.876089  636397 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:22.963053  636397 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:22.964307  636397 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:11:23.185636  636397 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:22.209953  639611 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:11:22.623609  639611 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:11:23.207075  639611 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:11:23.208086  639611 kubeadm.go:319] 
	I1124 03:11:23.208184  639611 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:11:23.208202  639611 kubeadm.go:319] 
	I1124 03:11:23.208296  639611 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:11:23.208304  639611 kubeadm.go:319] 
	I1124 03:11:23.208344  639611 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:11:23.208443  639611 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:11:23.208509  639611 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:11:23.208519  639611 kubeadm.go:319] 
	I1124 03:11:23.208591  639611 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:11:23.208601  639611 kubeadm.go:319] 
	I1124 03:11:23.208661  639611 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:11:23.208671  639611 kubeadm.go:319] 
	I1124 03:11:23.208771  639611 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:11:23.208934  639611 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:11:23.209014  639611 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:11:23.209021  639611 kubeadm.go:319] 
	I1124 03:11:23.209090  639611 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:11:23.209153  639611 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:11:23.209159  639611 kubeadm.go:319] 
	I1124 03:11:23.209225  639611 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token yx8fea.c13myzzt6w383nef \
	I1124 03:11:23.209329  639611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:11:23.209368  639611 kubeadm.go:319] 	--control-plane 
	I1124 03:11:23.209382  639611 kubeadm.go:319] 
	I1124 03:11:23.209513  639611 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:11:23.209523  639611 kubeadm.go:319] 
	I1124 03:11:23.209667  639611 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token yx8fea.c13myzzt6w383nef \
	I1124 03:11:23.209795  639611 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
	I1124 03:11:23.212372  639611 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:11:23.212472  639611 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:11:23.212489  639611 cni.go:84] Creating CNI manager for ""
	I1124 03:11:23.212498  639611 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:23.213669  639611 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:11:22.819290  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.318726  631782 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.413238  631782 kubeadm.go:1114] duration metric: took 4.702498844s to wait for elevateKubeSystemPrivileges
	I1124 03:11:23.413274  631782 kubeadm.go:403] duration metric: took 15.686211393s to StartCluster
	I1124 03:11:23.413298  631782 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:23.413374  631782 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:23.415097  631782 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:23.415455  631782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:23.415991  631782 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:23.416200  631782 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:23.416393  631782 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:23.416478  631782 addons.go:70] Setting storage-provisioner=true in profile "no-preload-603010"
	I1124 03:11:23.416515  631782 addons.go:239] Setting addon storage-provisioner=true in "no-preload-603010"
	I1124 03:11:23.416545  631782 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:11:23.416771  631782 addons.go:70] Setting default-storageclass=true in profile "no-preload-603010"
	I1124 03:11:23.416794  631782 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603010"
	I1124 03:11:23.417522  631782 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:11:23.418922  631782 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:11:23.420690  631782 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:23.422440  631782 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:23.453170  631782 addons.go:239] Setting addon default-storageclass=true in "no-preload-603010"
	I1124 03:11:23.453315  631782 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:11:23.454249  631782 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:11:23.456721  631782 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:23.187200  636397 addons.go:530] duration metric: took 511.871879ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:23.468811  636397 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-993813" context rescaled to 1 replicas
	I1124 03:11:23.457832  631782 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:23.457852  631782 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:23.457945  631782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:11:23.485040  631782 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:23.485073  631782 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:23.485135  631782 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:11:23.488649  631782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:11:23.522776  631782 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:11:23.578154  631782 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:23.637057  631782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:23.642323  631782 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:23.675165  631782 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:23.795763  631782 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:23.982706  631782 node_ready.go:35] waiting up to 6m0s for node "no-preload-603010" to be "Ready" ...
	I1124 03:11:23.988365  631782 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
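
The "host record injected" message comes from the pipeline run at 03:11:23.578 above: minikube reads the coredns ConfigMap, uses sed to splice a hosts block (mapping host.minikube.internal to the gateway IP) in front of the forward-to-resolv.conf directive, and replaces the ConfigMap. A minimal standalone sketch of the same edit, assuming a reachable cluster and the default CoreDNS Corefile layout:

    kubectl -n kube-system get configmap coredns -o yaml \
      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' \
      | kubectl replace -f -
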
	I1124 03:11:23.214606  639611 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:11:23.218969  639611 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:11:23.219002  639611 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:11:23.233030  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:11:23.530587  639611 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:11:23.530753  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.530907  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-438041 minikube.k8s.io/updated_at=2025_11_24T03_11_23_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=newest-cni-438041 minikube.k8s.io/primary=true
	I1124 03:11:23.553306  639611 ops.go:34] apiserver oom_adj: -16
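
The oom_adj check above reads /proc/$(pgrep kube-apiserver)/oom_adj to confirm the API server is shielded from the kernel OOM killer: on the legacy oom_adj scale (-17..15) negative values make a process an unlikely OOM victim, and -16 is how the strongly negative oom_score_adj that kubelet typically assigns to critical static pods renders on that scale. To reproduce the check by hand (sketch; assumes exactly one kube-apiserver process on the node):

    cat /proc/$(pgrep -xn kube-apiserver)/oom_adj        # legacy scale, -17..15
    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj  # modern scale, -1000..1000
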
	I1124 03:11:23.638819  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:24.139560  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:24.639641  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:25.139273  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:25.638941  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:26.139461  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:26.638988  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:23.989407  631782 addons.go:530] duration metric: took 573.023057ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:24.300916  631782 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-603010" context rescaled to 1 replicas
	W1124 03:11:25.985432  631782 node_ready.go:57] node "no-preload-603010" has "Ready":"False" status (will retry)
	I1124 03:11:27.139734  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:27.639015  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:28.139551  639611 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:11:28.207738  639611 kubeadm.go:1114] duration metric: took 4.677029552s to wait for elevateKubeSystemPrivileges
	I1124 03:11:28.207780  639611 kubeadm.go:403] duration metric: took 16.661698302s to StartCluster
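
The run of `kubectl get sa default` calls at a ~500ms cadence above is the elevateKubeSystemPrivileges wait: after creating the minikube-rbac clusterrolebinding, minikube polls until the `default` service account exists, since the controller manager creates it asynchronously on a fresh control plane (4.68s here, 4.70s in the no-preload run above). A minimal sketch of the same wait, assuming kubectl is on PATH and KUBECONFIG is exported:

    until kubectl get sa default >/dev/null 2>&1; do
      sleep 0.5   # matches the ~500ms retry interval visible in the timestamps
    done
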
	I1124 03:11:28.207804  639611 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:28.207878  639611 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:28.209479  639611 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:28.209719  639611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:11:28.209737  639611 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:28.209814  639611 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:28.209929  639611 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-438041"
	I1124 03:11:28.209946  639611 addons.go:70] Setting default-storageclass=true in profile "newest-cni-438041"
	I1124 03:11:28.209971  639611 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-438041"
	I1124 03:11:28.209980  639611 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-438041"
	I1124 03:11:28.210010  639611 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:28.210056  639611 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:28.210387  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:28.210537  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:28.211106  639611 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:28.212323  639611 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:28.233230  639611 addons.go:239] Setting addon default-storageclass=true in "newest-cni-438041"
	I1124 03:11:28.233278  639611 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:28.233850  639611 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:28.234771  639611 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:28.235819  639611 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:28.235861  639611 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:28.235962  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:28.261133  639611 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:28.261156  639611 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:28.261334  639611 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:28.267999  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:28.289398  639611 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:28.299784  639611 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:11:28.359817  639611 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:28.384919  639611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:28.404504  639611 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:28.491961  639611 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 03:11:28.493110  639611 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:28.493157  639611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1124 03:11:28.510848  639611 kapi.go:211] failed rescaling "coredns" deployment in "kube-system" namespace and "newest-cni-438041" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E1124 03:11:28.510875  639611 start.go:161] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
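
The rescale failure above is Kubernetes' optimistic-concurrency error: the rescale read the coredns deployment and wrote it back, but another writer (the deployment controller) updated the object in between, so the API server rejected the stale resourceVersion with "the object has been modified". minikube classifies this one as non-retryable and only warns; a caller that wanted the scale to stick could simply re-issue it, since each attempt re-reads the object (sketch, hypothetical retry count):

    for i in 1 2 3; do
      kubectl -n kube-system scale deployment coredns --replicas=1 && break
      sleep 1
    done
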
	I1124 03:11:28.701114  639611 api_server.go:72] duration metric: took 491.340672ms to wait for apiserver process to appear ...
	I1124 03:11:28.701143  639611 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:28.701166  639611 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:28.705994  639611 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:11:28.706754  639611 api_server.go:141] control plane version: v1.34.1
	I1124 03:11:28.706781  639611 api_server.go:131] duration metric: took 5.630796ms to wait for apiserver health ...
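
The healthz probe above is a plain HTTPS GET against the API server; any 200 response with body "ok" passes. By hand (sketch; -k skips verification of the cluster's self-signed serving certificate, or point --cacert at the cluster CA instead):

    curl -sk https://192.168.94.2:8443/healthz
    # ok
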
	I1124 03:11:28.706793  639611 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:28.709054  639611 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:11:28.709369  639611 system_pods.go:59] 9 kube-system pods found
	I1124 03:11:28.709395  639611 system_pods.go:61] "coredns-66bc5c9577-b5rlp" [ec3ad010-7694-4640-9638-fe6f5c97f56a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:28.709402  639611 system_pods.go:61] "coredns-66bc5c9577-mwvq8" [c8831e7f-34c0-40c7-a728-7f7882ed604a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:28.709411  639611 system_pods.go:61] "etcd-newest-cni-438041" [7acbb753-dfd2-4438-b370-a7e38c4fbc5d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:11:28.709418  639611 system_pods.go:61] "kindnet-xp46p" [19fa7668-24bd-454c-a5df-37534a06d3a5] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:11:28.709423  639611 system_pods.go:61] "kube-apiserver-newest-cni-438041" [c7d90375-f6c0-4a1f-8b80-81574119b191] Running
	I1124 03:11:28.709432  639611 system_pods.go:61] "kube-controller-manager-newest-cni-438041" [54b144f6-6f26-4e9b-818b-cbb2d7b4c0a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:11:28.709437  639611 system_pods.go:61] "kube-proxy-n85pg" [86f875e2-7efc-4b60-b031-a1de71ea7502] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:11:28.709447  639611 system_pods.go:61] "kube-scheduler-newest-cni-438041" [75e99a3a-d4a9-4428-a52a-ef5ac4edc76c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:11:28.709457  639611 system_pods.go:61] "storage-provisioner" [9a94c2f7-e288-4528-b22c-f413d79bdf46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:28.709467  639611 system_pods.go:74] duration metric: took 2.667768ms to wait for pod list to return data ...
	I1124 03:11:28.709481  639611 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:28.710153  639611 addons.go:530] duration metric: took 500.34824ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:11:28.711298  639611 default_sa.go:45] found service account: "default"
	I1124 03:11:28.711317  639611 default_sa.go:55] duration metric: took 1.826862ms for default service account to be created ...
	I1124 03:11:28.711328  639611 kubeadm.go:587] duration metric: took 501.561139ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:11:28.711341  639611 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:28.713171  639611 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:28.713192  639611 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:28.713206  639611 node_conditions.go:105] duration metric: took 1.86027ms to run NodePressure ...
	I1124 03:11:28.713217  639611 start.go:242] waiting for startup goroutines ...
	I1124 03:11:28.713224  639611 start.go:247] waiting for cluster config update ...
	I1124 03:11:28.713233  639611 start.go:256] writing updated cluster config ...
	I1124 03:11:28.713443  639611 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:28.759550  639611 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:11:28.760722  639611 out.go:179] * Done! kubectl is now configured to use "newest-cni-438041" cluster and "default" namespace by default
	W1124 03:11:24.968153  636397 node_ready.go:57] node "default-k8s-diff-port-993813" has "Ready":"False" status (will retry)
	W1124 03:11:27.467212  636397 node_ready.go:57] node "default-k8s-diff-port-993813" has "Ready":"False" status (will retry)
	W1124 03:11:27.985481  631782 node_ready.go:57] node "no-preload-603010" has "Ready":"False" status (will retry)
	W1124 03:11:29.986348  631782 node_ready.go:57] node "no-preload-603010" has "Ready":"False" status (will retry)
	W1124 03:11:32.485831  631782 node_ready.go:57] node "no-preload-603010" has "Ready":"False" status (will retry)
	W1124 03:11:29.468262  636397 node_ready.go:57] node "default-k8s-diff-port-993813" has "Ready":"False" status (will retry)
	W1124 03:11:31.967715  636397 node_ready.go:57] node "default-k8s-diff-port-993813" has "Ready":"False" status (will retry)
	I1124 03:11:34.466994  636397 node_ready.go:49] node "default-k8s-diff-port-993813" is "Ready"
	I1124 03:11:34.467022  636397 node_ready.go:38] duration metric: took 11.502672577s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:11:34.467035  636397 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:34.467110  636397 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:11:34.478740  636397 api_server.go:72] duration metric: took 11.803297035s to wait for apiserver process to appear ...
	I1124 03:11:34.478765  636397 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:34.478786  636397 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:11:34.483466  636397 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1124 03:11:34.484430  636397 api_server.go:141] control plane version: v1.34.1
	I1124 03:11:34.484461  636397 api_server.go:131] duration metric: took 5.687933ms to wait for apiserver health ...
	I1124 03:11:34.484472  636397 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:34.487442  636397 system_pods.go:59] 8 kube-system pods found
	I1124 03:11:34.487468  636397 system_pods.go:61] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:34.487474  636397 system_pods.go:61] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running
	I1124 03:11:34.487480  636397 system_pods.go:61] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running
	I1124 03:11:34.487484  636397 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running
	I1124 03:11:34.487488  636397 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running
	I1124 03:11:34.487492  636397 system_pods.go:61] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running
	I1124 03:11:34.487495  636397 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running
	I1124 03:11:34.487504  636397 system_pods.go:61] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:34.487510  636397 system_pods.go:74] duration metric: took 3.032367ms to wait for pod list to return data ...
	I1124 03:11:34.487519  636397 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:34.489541  636397 default_sa.go:45] found service account: "default"
	I1124 03:11:34.489559  636397 default_sa.go:55] duration metric: took 2.034688ms for default service account to be created ...
	I1124 03:11:34.489572  636397 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:11:34.492558  636397 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:34.492600  636397 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:34.492617  636397 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running
	I1124 03:11:34.492626  636397 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running
	I1124 03:11:34.492632  636397 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running
	I1124 03:11:34.492642  636397 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running
	I1124 03:11:34.492652  636397 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running
	I1124 03:11:34.492658  636397 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running
	I1124 03:11:34.492665  636397 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:34.492694  636397 retry.go:31] will retry after 200.05639ms: missing components: kube-dns
	I1124 03:11:34.696030  636397 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:34.696063  636397 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:34.696069  636397 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running
	I1124 03:11:34.696077  636397 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running
	I1124 03:11:34.696080  636397 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running
	I1124 03:11:34.696083  636397 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running
	I1124 03:11:34.696087  636397 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running
	I1124 03:11:34.696090  636397 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running
	I1124 03:11:34.696095  636397 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:34.696110  636397 retry.go:31] will retry after 280.398371ms: missing components: kube-dns
	I1124 03:11:34.980332  636397 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:34.980374  636397 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:34.980383  636397 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running
	I1124 03:11:34.980392  636397 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running
	I1124 03:11:34.980397  636397 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running
	I1124 03:11:34.980403  636397 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running
	I1124 03:11:34.980408  636397 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running
	I1124 03:11:34.980414  636397 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running
	I1124 03:11:34.980422  636397 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:34.980448  636397 retry.go:31] will retry after 395.954624ms: missing components: kube-dns
	I1124 03:11:35.380496  636397 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:35.380531  636397 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running
	I1124 03:11:35.380539  636397 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running
	I1124 03:11:35.380546  636397 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running
	I1124 03:11:35.380552  636397 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running
	I1124 03:11:35.380558  636397 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running
	I1124 03:11:35.380562  636397 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running
	I1124 03:11:35.380567  636397 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running
	I1124 03:11:35.380572  636397 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running
	I1124 03:11:35.380583  636397 system_pods.go:126] duration metric: took 891.004931ms to wait for k8s-apps to be running ...
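
The three retries above (200ms, 280ms, 396ms) show retry.go's jittered, roughly geometric backoff while the one missing component, kube-dns, finished starting. Outside the harness the same condition can be awaited with kubectl alone (sketch):

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
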
	I1124 03:11:35.380597  636397 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:11:35.380649  636397 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:35.393441  636397 system_svc.go:56] duration metric: took 12.837679ms WaitForService to wait for kubelet
	I1124 03:11:35.393466  636397 kubeadm.go:587] duration metric: took 12.71802981s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:11:35.393481  636397 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:35.395811  636397 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:35.395841  636397 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:35.395858  636397 node_conditions.go:105] duration metric: took 2.372506ms to run NodePressure ...
	I1124 03:11:35.395872  636397 start.go:242] waiting for startup goroutines ...
	I1124 03:11:35.395895  636397 start.go:247] waiting for cluster config update ...
	I1124 03:11:35.395910  636397 start.go:256] writing updated cluster config ...
	I1124 03:11:35.396213  636397 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:35.399585  636397 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:35.402632  636397 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.406208  636397 pod_ready.go:94] pod "coredns-66bc5c9577-w62hm" is "Ready"
	I1124 03:11:35.406238  636397 pod_ready.go:86] duration metric: took 3.573858ms for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.407851  636397 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.411135  636397 pod_ready.go:94] pod "etcd-default-k8s-diff-port-993813" is "Ready"
	I1124 03:11:35.411153  636397 pod_ready.go:86] duration metric: took 3.282775ms for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.412766  636397 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.416462  636397 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-993813" is "Ready"
	I1124 03:11:35.416483  636397 pod_ready.go:86] duration metric: took 3.700448ms for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.418174  636397 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:35.803236  636397 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-993813" is "Ready"
	I1124 03:11:35.803266  636397 pod_ready.go:86] duration metric: took 385.06776ms for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.003469  636397 pod_ready.go:83] waiting for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.402595  636397 pod_ready.go:94] pod "kube-proxy-xgjzs" is "Ready"
	I1124 03:11:36.402619  636397 pod_ready.go:86] duration metric: took 399.12563ms for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.604639  636397 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:37.003065  636397 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-993813" is "Ready"
	I1124 03:11:37.003089  636397 pod_ready.go:86] duration metric: took 398.428767ms for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:37.003101  636397 pod_ready.go:40] duration metric: took 1.603482207s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:37.046307  636397 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:11:37.047979  636397 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-993813" cluster and "default" namespace by default
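
The "extra waiting" phase above iterates the six control-plane label selectors and, for each, waits for the matching pod to be Ready or be gone; the same phase repeats for no-preload-603010 below. A rough standalone equivalent (sketch; note kubectl wait fails rather than succeeds when a selector matches nothing, so the real code handles the "gone" case separately):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=240s
    done
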
	W1124 03:11:34.486500  631782 node_ready.go:57] node "no-preload-603010" has "Ready":"False" status (will retry)
	I1124 03:11:36.485217  631782 node_ready.go:49] node "no-preload-603010" is "Ready"
	I1124 03:11:36.485247  631782 node_ready.go:38] duration metric: took 12.502511597s for node "no-preload-603010" to be "Ready" ...
	I1124 03:11:36.485264  631782 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:36.485315  631782 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:11:36.496780  631782 api_server.go:72] duration metric: took 13.080544347s to wait for apiserver process to appear ...
	I1124 03:11:36.496802  631782 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:36.496819  631782 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:11:36.500722  631782 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:11:36.501639  631782 api_server.go:141] control plane version: v1.34.1
	I1124 03:11:36.501668  631782 api_server.go:131] duration metric: took 4.859943ms to wait for apiserver health ...
	I1124 03:11:36.501676  631782 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:36.504265  631782 system_pods.go:59] 8 kube-system pods found
	I1124 03:11:36.504312  631782 system_pods.go:61] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:36.504327  631782 system_pods.go:61] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running
	I1124 03:11:36.504340  631782 system_pods.go:61] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running
	I1124 03:11:36.504349  631782 system_pods.go:61] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running
	I1124 03:11:36.504357  631782 system_pods.go:61] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running
	I1124 03:11:36.504365  631782 system_pods.go:61] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running
	I1124 03:11:36.504371  631782 system_pods.go:61] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running
	I1124 03:11:36.504383  631782 system_pods.go:61] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:36.504394  631782 system_pods.go:74] duration metric: took 2.710904ms to wait for pod list to return data ...
	I1124 03:11:36.504406  631782 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:36.506380  631782 default_sa.go:45] found service account: "default"
	I1124 03:11:36.506397  631782 default_sa.go:55] duration metric: took 1.983667ms for default service account to be created ...
	I1124 03:11:36.506407  631782 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:11:36.508530  631782 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:36.508552  631782 system_pods.go:89] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:36.508557  631782 system_pods.go:89] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running
	I1124 03:11:36.508563  631782 system_pods.go:89] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running
	I1124 03:11:36.508567  631782 system_pods.go:89] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running
	I1124 03:11:36.508570  631782 system_pods.go:89] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running
	I1124 03:11:36.508574  631782 system_pods.go:89] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running
	I1124 03:11:36.508577  631782 system_pods.go:89] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running
	I1124 03:11:36.508583  631782 system_pods.go:89] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:36.508608  631782 retry.go:31] will retry after 237.617737ms: missing components: kube-dns
	I1124 03:11:36.749857  631782 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:36.749896  631782 system_pods.go:89] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running
	I1124 03:11:36.749902  631782 system_pods.go:89] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running
	I1124 03:11:36.749906  631782 system_pods.go:89] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running
	I1124 03:11:36.749909  631782 system_pods.go:89] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running
	I1124 03:11:36.749913  631782 system_pods.go:89] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running
	I1124 03:11:36.749916  631782 system_pods.go:89] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running
	I1124 03:11:36.749919  631782 system_pods.go:89] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running
	I1124 03:11:36.749922  631782 system_pods.go:89] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running
	I1124 03:11:36.749930  631782 system_pods.go:126] duration metric: took 243.517252ms to wait for k8s-apps to be running ...
	I1124 03:11:36.749940  631782 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:11:36.749989  631782 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:36.763009  631782 system_svc.go:56] duration metric: took 13.057269ms WaitForService to wait for kubelet
	I1124 03:11:36.763038  631782 kubeadm.go:587] duration metric: took 13.346804489s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:11:36.763061  631782 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:36.765293  631782 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:36.765318  631782 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:36.765332  631782 node_conditions.go:105] duration metric: took 2.266082ms to run NodePressure ...
	I1124 03:11:36.765346  631782 start.go:242] waiting for startup goroutines ...
	I1124 03:11:36.765353  631782 start.go:247] waiting for cluster config update ...
	I1124 03:11:36.765363  631782 start.go:256] writing updated cluster config ...
	I1124 03:11:36.765588  631782 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:36.769133  631782 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:36.772057  631782 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.775398  631782 pod_ready.go:94] pod "coredns-66bc5c9577-9n5xf" is "Ready"
	I1124 03:11:36.775416  631782 pod_ready.go:86] duration metric: took 3.34099ms for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.777223  631782 pod_ready.go:83] waiting for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.780532  631782 pod_ready.go:94] pod "etcd-no-preload-603010" is "Ready"
	I1124 03:11:36.780549  631782 pod_ready.go:86] duration metric: took 3.305626ms for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.782225  631782 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.785532  631782 pod_ready.go:94] pod "kube-apiserver-no-preload-603010" is "Ready"
	I1124 03:11:36.785548  631782 pod_ready.go:86] duration metric: took 3.304015ms for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:36.787228  631782 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:37.173629  631782 pod_ready.go:94] pod "kube-controller-manager-no-preload-603010" is "Ready"
	I1124 03:11:37.173655  631782 pod_ready.go:86] duration metric: took 386.410612ms for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:37.374053  631782 pod_ready.go:83] waiting for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:37.772793  631782 pod_ready.go:94] pod "kube-proxy-swj6c" is "Ready"
	I1124 03:11:37.772822  631782 pod_ready.go:86] duration metric: took 398.744991ms for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:37.972562  631782 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:38.373638  631782 pod_ready.go:94] pod "kube-scheduler-no-preload-603010" is "Ready"
	I1124 03:11:38.373665  631782 pod_ready.go:86] duration metric: took 401.078498ms for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:11:38.373676  631782 pod_ready.go:40] duration metric: took 1.604514204s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:38.416726  631782 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:11:38.418110  631782 out.go:179] * Done! kubectl is now configured to use "no-preload-603010" cluster and "default" namespace by default
	W1124 03:11:38.424014  631782 root.go:91] failed to log command end to audit: failed to find a log row with id equals to c63882ef-fed9-480a-88cd-1e18d4178646
	
	
	==> CRI-O <==
	Nov 24 03:11:36 no-preload-603010 crio[764]: time="2025-11-24T03:11:36.605373362Z" level=info msg="Starting container: 1ccec57b7938627dc1ae6826efe35c86a15ac06f22523e63697e34803913f421" id=fe8c62e7-3ffd-4504-b54b-229da7de0745 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:11:36 no-preload-603010 crio[764]: time="2025-11-24T03:11:36.607038215Z" level=info msg="Started container" PID=2852 containerID=1ccec57b7938627dc1ae6826efe35c86a15ac06f22523e63697e34803913f421 description=kube-system/coredns-66bc5c9577-9n5xf/coredns id=fe8c62e7-3ffd-4504-b54b-229da7de0745 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f99dda36f4bbe51a5ad0dce5a19b80d9294d7240db643ddd8e1f7d62e6cb9d0e
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.886835234Z" level=info msg="Running pod sandbox: default/busybox/POD" id=a627000b-1184-4013-a338-d282ebbd1e0c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.886927587Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.891665798Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:425fe801efe89d9d389bc459700150c5bbe7b0e9d3f8463cdc6f615e23498f0c UID:0d4cbf8f-cfc7-4e80-badf-b1b840617547 NetNS:/var/run/netns/c19dee02-7fdd-442d-8c18-b66b68781d31 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000363268}] Aliases:map[]}"
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.891693846Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.9062912Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:425fe801efe89d9d389bc459700150c5bbe7b0e9d3f8463cdc6f615e23498f0c UID:0d4cbf8f-cfc7-4e80-badf-b1b840617547 NetNS:/var/run/netns/c19dee02-7fdd-442d-8c18-b66b68781d31 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000363268}] Aliases:map[]}"
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.906405517Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.907071029Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.907807797Z" level=info msg="Ran pod sandbox 425fe801efe89d9d389bc459700150c5bbe7b0e9d3f8463cdc6f615e23498f0c with infra container: default/busybox/POD" id=a627000b-1184-4013-a338-d282ebbd1e0c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.908838543Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e570bc00-d83f-48f4-93e0-ab63b9be6673 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.908981819Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e570bc00-d83f-48f4-93e0-ab63b9be6673 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.909027021Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e570bc00-d83f-48f4-93e0-ab63b9be6673 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.90955218Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6e7e4198-23bd-4e57-9f43-e2154b866df3 name=/runtime.v1.ImageService/PullImage
	Nov 24 03:11:38 no-preload-603010 crio[764]: time="2025-11-24T03:11:38.910934792Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:11:39 no-preload-603010 crio[764]: time="2025-11-24T03:11:39.524112523Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=6e7e4198-23bd-4e57-9f43-e2154b866df3 name=/runtime.v1.ImageService/PullImage
	Nov 24 03:11:39 no-preload-603010 crio[764]: time="2025-11-24T03:11:39.524702777Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=96c355af-a618-4555-b3dd-2c243a639050 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:39 no-preload-603010 crio[764]: time="2025-11-24T03:11:39.525926758Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0d607e71-4e36-4352-8463-7ed78a550e48 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:39 no-preload-603010 crio[764]: time="2025-11-24T03:11:39.528868794Z" level=info msg="Creating container: default/busybox/busybox" id=60b6bcd2-c3bf-4eb7-9ae5-12ceb4b545e8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:39 no-preload-603010 crio[764]: time="2025-11-24T03:11:39.529008115Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:39 no-preload-603010 crio[764]: time="2025-11-24T03:11:39.532964229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:39 no-preload-603010 crio[764]: time="2025-11-24T03:11:39.533356704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:39 no-preload-603010 crio[764]: time="2025-11-24T03:11:39.556869235Z" level=info msg="Created container 77b5ffb34771796173956e316313d6ed44d7a5a9af8554c3dd18f021c88551e0: default/busybox/busybox" id=60b6bcd2-c3bf-4eb7-9ae5-12ceb4b545e8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:39 no-preload-603010 crio[764]: time="2025-11-24T03:11:39.557411668Z" level=info msg="Starting container: 77b5ffb34771796173956e316313d6ed44d7a5a9af8554c3dd18f021c88551e0" id=3a4352c6-c8ec-4b5e-89f2-5d9a1516db69 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:11:39 no-preload-603010 crio[764]: time="2025-11-24T03:11:39.558958512Z" level=info msg="Started container" PID=2934 containerID=77b5ffb34771796173956e316313d6ed44d7a5a9af8554c3dd18f021c88551e0 description=default/busybox/busybox id=3a4352c6-c8ec-4b5e-89f2-5d9a1516db69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=425fe801efe89d9d389bc459700150c5bbe7b0e9d3f8463cdc6f615e23498f0c
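
The CRI-O excerpt traces one complete pod start for default/busybox through the CRI: RunPodSandbox (netns creation plus kindnet CNI attach), ImageStatus (cache miss), PullImage by tag resolving to a digest, then CreateContainer and StartContainer. The same runtime state can be inspected directly with crictl (sketch; assumes crictl on the node is configured to talk to the CRI-O socket):

    sudo crictl pods --name busybox
    sudo crictl images | grep busybox
    sudo crictl ps --name busybox
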
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	77b5ffb347717       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   425fe801efe89       busybox                                     default
	1ccec57b79386       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 seconds ago      Running             coredns                   0                   f99dda36f4bbe       coredns-66bc5c9577-9n5xf                    kube-system
	233348f0f774f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   6236796f6c665       storage-provisioner                         kube-system
	18fc8721924ea       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    21 seconds ago      Running             kindnet-cni               0                   b26b6a9349b2e       kindnet-7gvgm                               kube-system
	cfa2d2b369958       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   1223a949d9fb9       kube-proxy-swj6c                            kube-system
	2d604a8ca9df3       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   9509f6b630887       etcd-no-preload-603010                      kube-system
	02ef92461ea10       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   746309639ba6c       kube-controller-manager-no-preload-603010   kube-system
	859988fb4c72c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   b0e11cb5530a3       kube-apiserver-no-preload-603010            kube-system
	efc2dc9f8b942       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   f0e16f910161c       kube-scheduler-no-preload-603010            kube-system
	
	
	==> coredns [1ccec57b7938627dc1ae6826efe35c86a15ac06f22523e63697e34803913f421] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44005 - 25233 "HINFO IN 1629102948850783150.5575158550916779690. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.968579069s
	
	
	==> describe nodes <==
	Name:               no-preload-603010
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-603010
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=no-preload-603010
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_11_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:11:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-603010
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:11:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:11:38 +0000   Mon, 24 Nov 2025 03:11:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:11:38 +0000   Mon, 24 Nov 2025 03:11:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:11:38 +0000   Mon, 24 Nov 2025 03:11:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:11:38 +0000   Mon, 24 Nov 2025 03:11:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-603010
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                1b59d48b-7e38-42b7-9a74-cd736c856d5f
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-9n5xf                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-no-preload-603010                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-7gvgm                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-no-preload-603010             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-no-preload-603010    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-swj6c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-no-preload-603010             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node no-preload-603010 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node no-preload-603010 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node no-preload-603010 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node no-preload-603010 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node no-preload-603010 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node no-preload-603010 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node no-preload-603010 event: Registered Node no-preload-603010 in Controller
	  Normal  NodeReady                11s                kubelet          Node no-preload-603010 status is now: NodeReady
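	(Arithmetic check, for readers scanning the dump: the "Allocated resources" block above is just the column-wise sum of the pod table — CPU requests 100m + 100m + 100m + 250m + 200m + 100m = 850m; the 100m CPU limit comes entirely from kindnet; memory requests 70Mi + 100Mi + 50Mi = 220Mi; memory limits 170Mi + 50Mi = 220Mi.)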
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [2d604a8ca9df366fa8e203cfe5664947ff95119c931cc2d50c84dc57cd22929b] <==
	{"level":"warn","ts":"2025-11-24T03:11:14.485621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.496000Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.505388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.512729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.522642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.530156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.539255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.547842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.558948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.580055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.587286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.600133Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.616028Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.634101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.650699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.669047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.677854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.686439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.693860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.703882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.712450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.730808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.745568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.763473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:14.835095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49854","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:11:47 up  1:54,  0 user,  load average: 4.79, 4.02, 2.55
	Linux no-preload-603010 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [18fc8721924eac1d8b1b69fb384faeb7a3142607e748da89074296d9d9437dfb] <==
	I1124 03:11:25.665462       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:11:25.665745       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:11:25.665907       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:11:25.665927       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:11:25.665963       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:11:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:11:25.868948       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:11:25.869315       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:11:25.869615       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:11:25.869721       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:11:26.261377       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:11:26.261406       1 metrics.go:72] Registering metrics
	I1124 03:11:26.261462       1 controller.go:711] "Syncing nftables rules"
	I1124 03:11:35.872469       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:11:35.872525       1 main.go:301] handling current node
	I1124 03:11:45.873996       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:11:45.874032       1 main.go:301] handling current node
	
	
	==> kube-apiserver [859988fb4c72c7dc73d38b404048ff2e989aa11af65c64f99f3c7f0784309e97] <==
	I1124 03:11:15.456371       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:11:15.460696       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:11:15.461142       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:11:15.471024       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:11:15.471116       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:11:15.471163       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:11:15.650764       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:11:16.263596       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:11:16.267199       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:11:16.267217       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:11:16.690647       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:11:16.729673       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:11:16.867186       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:11:16.873129       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 03:11:16.874028       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:11:16.878109       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:11:17.281696       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:11:17.790258       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:11:17.801913       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:11:17.810369       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:11:23.082653       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:11:23.185104       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:11:23.389366       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:11:23.397921       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 03:11:45.671266       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:54018: use of closed network connection
	
	
	==> kube-controller-manager [02ef92461ea103dbba103ab9ed078908bd45e3f21c52638a78e1cdf2ac05b0be] <==
	I1124 03:11:22.281576       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:11:22.281605       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:11:22.281609       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 03:11:22.281624       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:11:22.281611       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 03:11:22.281648       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:11:22.281688       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:11:22.281694       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:11:22.281706       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:11:22.281730       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:11:22.281741       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:11:22.281762       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:11:22.282437       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:11:22.283132       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:11:22.283252       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 03:11:22.284522       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:11:22.285662       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:11:22.285696       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:11:22.285822       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 03:11:22.288103       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:11:22.289930       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:11:22.291063       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 03:11:22.296286       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 03:11:22.305785       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:11:37.235167       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [cfa2d2b3699588993c2eed63cc8784ba548bac9499fb3d27266f1cc767a39ce3] <==
	I1124 03:11:23.749805       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:11:23.820170       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:11:23.920913       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:11:23.920964       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 03:11:23.921078       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:11:23.945544       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:11:23.945621       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:11:23.952213       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:11:23.952718       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:11:23.952754       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:11:23.954389       1 config.go:200] "Starting service config controller"
	I1124 03:11:23.954410       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:11:23.954523       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:11:23.954555       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:11:23.955016       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:11:23.955035       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:11:23.955240       1 config.go:309] "Starting node config controller"
	I1124 03:11:23.955256       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:11:23.955264       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:11:24.054602       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:11:24.055376       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:11:24.055404       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [efc2dc9f8b9427b02ba1b6028a13450c81c47194a50eb14e3f77f864c5bf77ad] <==
	E1124 03:11:15.392561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:11:15.392643       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:11:15.392685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:11:15.392721       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:11:15.392738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:11:15.392813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:11:15.392831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:11:15.392953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:11:15.393064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:11:15.393105       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:11:15.393132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:11:15.393254       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:11:16.221329       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:11:16.296708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:11:16.313957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:11:16.328339       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:11:16.412083       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:11:16.443060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:11:16.458015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:11:16.467867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:11:16.468837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:11:16.497186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:11:16.520180       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:11:16.524101       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1124 03:11:16.988593       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:11:18 no-preload-603010 kubelet[2241]: I1124 03:11:18.695682    2241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-603010" podStartSLOduration=2.695660082 podStartE2EDuration="2.695660082s" podCreationTimestamp="2025-11-24 03:11:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:18.69128078 +0000 UTC m=+1.163370770" watchObservedRunningTime="2025-11-24 03:11:18.695660082 +0000 UTC m=+1.167750065"
	Nov 24 03:11:18 no-preload-603010 kubelet[2241]: I1124 03:11:18.712101    2241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-603010" podStartSLOduration=1.712073106 podStartE2EDuration="1.712073106s" podCreationTimestamp="2025-11-24 03:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:18.711761713 +0000 UTC m=+1.183851707" watchObservedRunningTime="2025-11-24 03:11:18.712073106 +0000 UTC m=+1.184163101"
	Nov 24 03:11:18 no-preload-603010 kubelet[2241]: I1124 03:11:18.726759    2241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-603010" podStartSLOduration=1.726725903 podStartE2EDuration="1.726725903s" podCreationTimestamp="2025-11-24 03:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:18.725816101 +0000 UTC m=+1.197906095" watchObservedRunningTime="2025-11-24 03:11:18.726725903 +0000 UTC m=+1.198815900"
	Nov 24 03:11:22 no-preload-603010 kubelet[2241]: I1124 03:11:22.345592    2241 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 03:11:22 no-preload-603010 kubelet[2241]: I1124 03:11:22.346977    2241 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:11:23 no-preload-603010 kubelet[2241]: I1124 03:11:23.095443    2241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-603010" podStartSLOduration=6.095416332 podStartE2EDuration="6.095416332s" podCreationTimestamp="2025-11-24 03:11:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:18.743983924 +0000 UTC m=+1.216073919" watchObservedRunningTime="2025-11-24 03:11:23.095416332 +0000 UTC m=+5.567506328"
	Nov 24 03:11:23 no-preload-603010 kubelet[2241]: I1124 03:11:23.136590    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8b75c64-2a2e-4d0c-b1f7-fe242b173db7-xtables-lock\") pod \"kube-proxy-swj6c\" (UID: \"b8b75c64-2a2e-4d0c-b1f7-fe242b173db7\") " pod="kube-system/kube-proxy-swj6c"
	Nov 24 03:11:23 no-preload-603010 kubelet[2241]: I1124 03:11:23.136633    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8b75c64-2a2e-4d0c-b1f7-fe242b173db7-lib-modules\") pod \"kube-proxy-swj6c\" (UID: \"b8b75c64-2a2e-4d0c-b1f7-fe242b173db7\") " pod="kube-system/kube-proxy-swj6c"
	Nov 24 03:11:23 no-preload-603010 kubelet[2241]: I1124 03:11:23.136655    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8d791b5-f165-42db-8345-cdf52ce933d5-lib-modules\") pod \"kindnet-7gvgm\" (UID: \"a8d791b5-f165-42db-8345-cdf52ce933d5\") " pod="kube-system/kindnet-7gvgm"
	Nov 24 03:11:23 no-preload-603010 kubelet[2241]: I1124 03:11:23.136681    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6gbb\" (UniqueName: \"kubernetes.io/projected/a8d791b5-f165-42db-8345-cdf52ce933d5-kube-api-access-g6gbb\") pod \"kindnet-7gvgm\" (UID: \"a8d791b5-f165-42db-8345-cdf52ce933d5\") " pod="kube-system/kindnet-7gvgm"
	Nov 24 03:11:23 no-preload-603010 kubelet[2241]: I1124 03:11:23.136711    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b8b75c64-2a2e-4d0c-b1f7-fe242b173db7-kube-proxy\") pod \"kube-proxy-swj6c\" (UID: \"b8b75c64-2a2e-4d0c-b1f7-fe242b173db7\") " pod="kube-system/kube-proxy-swj6c"
	Nov 24 03:11:23 no-preload-603010 kubelet[2241]: I1124 03:11:23.136734    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kflpz\" (UniqueName: \"kubernetes.io/projected/b8b75c64-2a2e-4d0c-b1f7-fe242b173db7-kube-api-access-kflpz\") pod \"kube-proxy-swj6c\" (UID: \"b8b75c64-2a2e-4d0c-b1f7-fe242b173db7\") " pod="kube-system/kube-proxy-swj6c"
	Nov 24 03:11:23 no-preload-603010 kubelet[2241]: I1124 03:11:23.136766    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a8d791b5-f165-42db-8345-cdf52ce933d5-cni-cfg\") pod \"kindnet-7gvgm\" (UID: \"a8d791b5-f165-42db-8345-cdf52ce933d5\") " pod="kube-system/kindnet-7gvgm"
	Nov 24 03:11:23 no-preload-603010 kubelet[2241]: I1124 03:11:23.136813    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8d791b5-f165-42db-8345-cdf52ce933d5-xtables-lock\") pod \"kindnet-7gvgm\" (UID: \"a8d791b5-f165-42db-8345-cdf52ce933d5\") " pod="kube-system/kindnet-7gvgm"
	Nov 24 03:11:24 no-preload-603010 kubelet[2241]: I1124 03:11:24.691785    2241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-swj6c" podStartSLOduration=1.6917640330000001 podStartE2EDuration="1.691764033s" podCreationTimestamp="2025-11-24 03:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:24.681166034 +0000 UTC m=+7.153256028" watchObservedRunningTime="2025-11-24 03:11:24.691764033 +0000 UTC m=+7.163854030"
	Nov 24 03:11:26 no-preload-603010 kubelet[2241]: I1124 03:11:26.544718    2241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7gvgm" podStartSLOduration=1.520284593 podStartE2EDuration="3.544696223s" podCreationTimestamp="2025-11-24 03:11:23 +0000 UTC" firstStartedPulling="2025-11-24 03:11:23.415400314 +0000 UTC m=+5.887490306" lastFinishedPulling="2025-11-24 03:11:25.439811962 +0000 UTC m=+7.911901936" observedRunningTime="2025-11-24 03:11:25.68483571 +0000 UTC m=+8.156925716" watchObservedRunningTime="2025-11-24 03:11:26.544696223 +0000 UTC m=+9.016786217"
	Nov 24 03:11:36 no-preload-603010 kubelet[2241]: I1124 03:11:36.229797    2241 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:11:36 no-preload-603010 kubelet[2241]: I1124 03:11:36.328311    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jxbd\" (UniqueName: \"kubernetes.io/projected/332b95a2-035a-46f2-95ee-1bef73dff6a7-kube-api-access-6jxbd\") pod \"storage-provisioner\" (UID: \"332b95a2-035a-46f2-95ee-1bef73dff6a7\") " pod="kube-system/storage-provisioner"
	Nov 24 03:11:36 no-preload-603010 kubelet[2241]: I1124 03:11:36.328343    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bafc3685-6d22-404d-aedc-6f9d15506617-config-volume\") pod \"coredns-66bc5c9577-9n5xf\" (UID: \"bafc3685-6d22-404d-aedc-6f9d15506617\") " pod="kube-system/coredns-66bc5c9577-9n5xf"
	Nov 24 03:11:36 no-preload-603010 kubelet[2241]: I1124 03:11:36.328368    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/332b95a2-035a-46f2-95ee-1bef73dff6a7-tmp\") pod \"storage-provisioner\" (UID: \"332b95a2-035a-46f2-95ee-1bef73dff6a7\") " pod="kube-system/storage-provisioner"
	Nov 24 03:11:36 no-preload-603010 kubelet[2241]: I1124 03:11:36.328384    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hg28\" (UniqueName: \"kubernetes.io/projected/bafc3685-6d22-404d-aedc-6f9d15506617-kube-api-access-8hg28\") pod \"coredns-66bc5c9577-9n5xf\" (UID: \"bafc3685-6d22-404d-aedc-6f9d15506617\") " pod="kube-system/coredns-66bc5c9577-9n5xf"
	Nov 24 03:11:36 no-preload-603010 kubelet[2241]: I1124 03:11:36.712002    2241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9n5xf" podStartSLOduration=13.711982182 podStartE2EDuration="13.711982182s" podCreationTimestamp="2025-11-24 03:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:36.702207707 +0000 UTC m=+19.174297701" watchObservedRunningTime="2025-11-24 03:11:36.711982182 +0000 UTC m=+19.184072180"
	Nov 24 03:11:36 no-preload-603010 kubelet[2241]: I1124 03:11:36.720163    2241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.720145168 podStartE2EDuration="13.720145168s" podCreationTimestamp="2025-11-24 03:11:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:11:36.719722003 +0000 UTC m=+19.191811997" watchObservedRunningTime="2025-11-24 03:11:36.720145168 +0000 UTC m=+19.192235163"
	Nov 24 03:11:38 no-preload-603010 kubelet[2241]: I1124 03:11:38.640681    2241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xjc8\" (UniqueName: \"kubernetes.io/projected/0d4cbf8f-cfc7-4e80-badf-b1b840617547-kube-api-access-5xjc8\") pod \"busybox\" (UID: \"0d4cbf8f-cfc7-4e80-badf-b1b840617547\") " pod="default/busybox"
	Nov 24 03:11:39 no-preload-603010 kubelet[2241]: I1124 03:11:39.708278    2241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.092028693 podStartE2EDuration="1.708260724s" podCreationTimestamp="2025-11-24 03:11:38 +0000 UTC" firstStartedPulling="2025-11-24 03:11:38.909222402 +0000 UTC m=+21.381312388" lastFinishedPulling="2025-11-24 03:11:39.525454445 +0000 UTC m=+21.997544419" observedRunningTime="2025-11-24 03:11:39.707876776 +0000 UTC m=+22.179966771" watchObservedRunningTime="2025-11-24 03:11:39.708260724 +0000 UTC m=+22.180350718"
	
	
	==> storage-provisioner [233348f0f774fecda6fae38864bc5a7012ebc01a44db771326bc561d3c42a90d] <==
	I1124 03:11:36.607095       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:11:36.615911       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:11:36.615960       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:11:36.619712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:36.624159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:11:36.624344       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:11:36.624500       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-603010_d2c98f4e-d759-447f-8de4-4f55069f1e96!
	I1124 03:11:36.624497       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5b777d2-712b-44e4-a3bf-a14213c57432", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-603010_d2c98f4e-d759-447f-8de4-4f55069f1e96 became leader
	W1124 03:11:36.626456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:36.630646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:11:36.725341       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-603010_d2c98f4e-d759-447f-8de4-4f55069f1e96!
	W1124 03:11:38.633500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:38.637412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:40.640054       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:40.643652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:42.646337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:42.650961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:44.654538       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:44.658642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:46.661409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:11:46.665004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-603010 -n no-preload-603010
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-603010 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
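(Note: the two (dbg) Run probes above are plain CLI invocations, so they can be replayed by hand outside the harness. A minimal Go sketch follows — an illustration only, not part of the test suite; the binary path, profile name, and kubectl context are copied verbatim from the log above, and error handling is reduced to logging.)

	package main

	import (
		"log"
		"os/exec"
	)

	// probe shells out to one command and echoes its combined output,
	// mirroring the two helpers_test.go invocations shown above.
	func probe(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			log.Printf("%s failed: %v", name, err)
		}
		log.Printf("%s", out)
	}

	func main() {
		// Probe 1: API-server status for the profile (same flags as the harness).
		probe("out/minikube-linux-amd64", "status", "--format={{.APIServer}}",
			"-p", "no-preload-603010", "-n", "no-preload-603010")
		// Probe 2: names of any pods not in phase Running, across all namespaces.
		probe("kubectl", "--context", "no-preload-603010", "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A", "--field-selector=status.phase!=Running")
	}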
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.22s)

x
+
TestStartStop/group/newest-cni/serial/Pause (5.93s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-438041 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-438041 --alsologtostderr -v=1: exit status 80 (2.340897487s)

-- stdout --
	* Pausing node newest-cni-438041 ... 
	
-- /stdout --
** stderr ** 
	I1124 03:12:00.362814  655309 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:12:00.363085  655309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:00.363095  655309 out.go:374] Setting ErrFile to fd 2...
	I1124 03:12:00.363099  655309 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:00.363295  655309 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:12:00.363531  655309 out.go:368] Setting JSON to false
	I1124 03:12:00.363554  655309 mustload.go:66] Loading cluster: newest-cni-438041
	I1124 03:12:00.363906  655309 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:00.364270  655309 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:12:00.382414  655309 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:12:00.382628  655309 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:00.440326  655309 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:87 OomKillDisable:false NGoroutines:92 SystemTime:2025-11-24 03:12:00.429802854 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:00.441053  655309 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763935228-21975/minikube-v1.37.0-1763935228-21975-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763935228-21975-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-438041 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 03:12:00.442769  655309 out.go:179] * Pausing node newest-cni-438041 ... 
	I1124 03:12:00.443801  655309 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:12:00.444179  655309 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:00.444254  655309 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:12:00.464775  655309 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:12:00.565961  655309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:00.577814  655309 pause.go:52] kubelet running: true
	I1124 03:12:00.577883  655309 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:12:00.717395  655309 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:12:00.717515  655309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:12:00.788250  655309 cri.go:89] found id: "e62bf0a89aa63020c936ad6a03e6c0480301613e4cd0a960cd58ad2ac3fe719f"
	I1124 03:12:00.788277  655309 cri.go:89] found id: "ca8f3e49d19f2e01029131d694cb2f3366da510e6fc4954a37c6be9f877ca0a4"
	I1124 03:12:00.788284  655309 cri.go:89] found id: "0903674c0ff17f5f88d257aea9b1e2cf56ff9103105cdbeb4e86732b145c0bef"
	I1124 03:12:00.788291  655309 cri.go:89] found id: "5dcec9dda2453f45f4516eff019d2077d2052e95c11d896705f53b3ac53c11a9"
	I1124 03:12:00.788296  655309 cri.go:89] found id: "453c0dc25dde51ccdc58f6043d75d117dc72d3b347ea5068c17db0082002c0ad"
	I1124 03:12:00.788302  655309 cri.go:89] found id: "a629768f55496c2969d757c473189f52d99ddea90e0a365150097df5fe2ec9e2"
	I1124 03:12:00.788306  655309 cri.go:89] found id: ""
	I1124 03:12:00.788348  655309 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:12:00.800161  655309 retry.go:31] will retry after 293.006481ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:00Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:01.093696  655309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:01.106648  655309 pause.go:52] kubelet running: false
	I1124 03:12:01.106695  655309 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:12:01.218384  655309 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:12:01.218463  655309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:12:01.281666  655309 cri.go:89] found id: "e62bf0a89aa63020c936ad6a03e6c0480301613e4cd0a960cd58ad2ac3fe719f"
	I1124 03:12:01.281691  655309 cri.go:89] found id: "ca8f3e49d19f2e01029131d694cb2f3366da510e6fc4954a37c6be9f877ca0a4"
	I1124 03:12:01.281695  655309 cri.go:89] found id: "0903674c0ff17f5f88d257aea9b1e2cf56ff9103105cdbeb4e86732b145c0bef"
	I1124 03:12:01.281699  655309 cri.go:89] found id: "5dcec9dda2453f45f4516eff019d2077d2052e95c11d896705f53b3ac53c11a9"
	I1124 03:12:01.281704  655309 cri.go:89] found id: "453c0dc25dde51ccdc58f6043d75d117dc72d3b347ea5068c17db0082002c0ad"
	I1124 03:12:01.281709  655309 cri.go:89] found id: "a629768f55496c2969d757c473189f52d99ddea90e0a365150097df5fe2ec9e2"
	I1124 03:12:01.281713  655309 cri.go:89] found id: ""
	I1124 03:12:01.281762  655309 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:12:01.293226  655309 retry.go:31] will retry after 229.266751ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:01Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:01.522643  655309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:01.535401  655309 pause.go:52] kubelet running: false
	I1124 03:12:01.535447  655309 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:12:01.647164  655309 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:12:01.647261  655309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:12:01.714647  655309 cri.go:89] found id: "e62bf0a89aa63020c936ad6a03e6c0480301613e4cd0a960cd58ad2ac3fe719f"
	I1124 03:12:01.714668  655309 cri.go:89] found id: "ca8f3e49d19f2e01029131d694cb2f3366da510e6fc4954a37c6be9f877ca0a4"
	I1124 03:12:01.714672  655309 cri.go:89] found id: "0903674c0ff17f5f88d257aea9b1e2cf56ff9103105cdbeb4e86732b145c0bef"
	I1124 03:12:01.714676  655309 cri.go:89] found id: "5dcec9dda2453f45f4516eff019d2077d2052e95c11d896705f53b3ac53c11a9"
	I1124 03:12:01.714678  655309 cri.go:89] found id: "453c0dc25dde51ccdc58f6043d75d117dc72d3b347ea5068c17db0082002c0ad"
	I1124 03:12:01.714682  655309 cri.go:89] found id: "a629768f55496c2969d757c473189f52d99ddea90e0a365150097df5fe2ec9e2"
	I1124 03:12:01.714684  655309 cri.go:89] found id: ""
	I1124 03:12:01.714729  655309 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:12:01.726169  655309 retry.go:31] will retry after 703.259213ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:01Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:02.430010  655309 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:02.442899  655309 pause.go:52] kubelet running: false
	I1124 03:12:02.442959  655309 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:12:02.561152  655309 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:12:02.561229  655309 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:12:02.624326  655309 cri.go:89] found id: "e62bf0a89aa63020c936ad6a03e6c0480301613e4cd0a960cd58ad2ac3fe719f"
	I1124 03:12:02.624346  655309 cri.go:89] found id: "ca8f3e49d19f2e01029131d694cb2f3366da510e6fc4954a37c6be9f877ca0a4"
	I1124 03:12:02.624352  655309 cri.go:89] found id: "0903674c0ff17f5f88d257aea9b1e2cf56ff9103105cdbeb4e86732b145c0bef"
	I1124 03:12:02.624356  655309 cri.go:89] found id: "5dcec9dda2453f45f4516eff019d2077d2052e95c11d896705f53b3ac53c11a9"
	I1124 03:12:02.624360  655309 cri.go:89] found id: "453c0dc25dde51ccdc58f6043d75d117dc72d3b347ea5068c17db0082002c0ad"
	I1124 03:12:02.624365  655309 cri.go:89] found id: "a629768f55496c2969d757c473189f52d99ddea90e0a365150097df5fe2ec9e2"
	I1124 03:12:02.624369  655309 cri.go:89] found id: ""
	I1124 03:12:02.624412  655309 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:12:02.637975  655309 out.go:203] 
	W1124 03:12:02.639077  655309 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:12:02.639103  655309 out.go:285] * 
	W1124 03:12:02.643975  655309 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:12:02.645053  655309 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-438041 --alsologtostderr -v=1 failed: exit status 80
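Note on the failure shape above: the pause path polls `sudo runc list -f json` with randomized sub-second backoff (retry.go:31 logs delays of roughly 293ms, 229ms and 703ms) and only raises GUEST_PAUSE once /run/runc still cannot be opened on the final attempt. A minimal Go sketch of that retry loop, assuming hypothetical names (listRunning, maxAttempts) rather than minikube's actual internals:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listRunning is a hypothetical stand-in for the pause helper: it shells
// out exactly the way the trace above shows ("sudo runc list -f json").
func listRunning() error {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		return fmt.Errorf("list running: runc: %v: %s", err, out)
	}
	return nil
}

func main() {
	const maxAttempts = 4 // the trace shows three retries before giving up
	for i := 0; i < maxAttempts; i++ {
		err := listRunning()
		if err == nil {
			return
		}
		if i == maxAttempts-1 {
			fmt.Println("X Exiting due to GUEST_PAUSE:", err)
			return
		}
		// Randomized backoff, matching the varying delays printed
		// by retry.go in the trace above.
		d := time.Duration(200+rand.Intn(600)) * time.Millisecond
		fmt.Printf("will retry after %v\n", d)
		time.Sleep(d)
	}
}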
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-438041
helpers_test.go:243: (dbg) docker inspect newest-cni-438041:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64",
	        "Created": "2025-11-24T03:11:03.961758173Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 651783,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:11:49.529161449Z",
	            "FinishedAt": "2025-11-24T03:11:48.62049328Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64/hosts",
	        "LogPath": "/var/lib/docker/containers/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64-json.log",
	        "Name": "/newest-cni-438041",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-438041:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-438041",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64",
	                "LowerDir": "/var/lib/docker/overlay2/0710d621fdc7311911f475a554b8f76b82b86a9db0b0e85e3045f0e2074e3cc1-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0710d621fdc7311911f475a554b8f76b82b86a9db0b0e85e3045f0e2074e3cc1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0710d621fdc7311911f475a554b8f76b82b86a9db0b0e85e3045f0e2074e3cc1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0710d621fdc7311911f475a554b8f76b82b86a9db0b0e85e3045f0e2074e3cc1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-438041",
	                "Source": "/var/lib/docker/volumes/newest-cni-438041/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-438041",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-438041",
	                "name.minikube.sigs.k8s.io": "newest-cni-438041",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "247d5e804118e132e665508e9478d80a036c20ab09eab525bfdf6959cd6a6736",
	            "SandboxKey": "/var/run/docker/netns/247d5e804118",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-438041": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b30d540ef88b055a6ad3cc188fd27395739f217150ea48ac734e123a015ff9c1",
	                    "EndpointID": "d258b0a220bf40ae620b50ee478407904848ef9f87b20ac197bcfb380c12a70e",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "06:25:0b:77:12:40",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-438041",
	                        "7dcb0e0e285e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
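For reference, the Ports map in the NetworkSettings block above (22/tcp -> 127.0.0.1:33483, 8443/tcp -> 127.0.0.1:33486) is what the provisioning steps later in this log resolve via a Go template. A self-contained sketch using the same template string that appears verbatim in the cli_runner lines below:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same Go template the minikube logs use to look up the forwarded
	// SSH port of the node container (33483 in the inspect output above).
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"newest-cni-438041").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("ssh port: %s", out)
}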
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438041 -n newest-cni-438041
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438041 -n newest-cni-438041: exit status 2 (308.676767ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
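The "(may be ok)" note reflects that the harness tolerates a non-zero status here: `minikube status` still prints the Host state ("Running") on stdout while encoding component state in the exit code. A minimal Go sketch of recovering that code with os/exec (binary path and profile name taken from this report):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "newest-cni-438041", "-n", "newest-cni-438041")
	out, err := cmd.Output() // stdout still carries "Running" on failure
	fmt.Printf("host state: %s", out)
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("status exit code:", ee.ExitCode()) // 2 in this run
	}
}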
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-438041 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-965704 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │                     │
	│ ssh     │ -p flannel-965704 sudo systemctl cat containerd --no-pager                                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo containerd config dump                                                                                                                                                                                                 │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo crio config                                                                                                                                                                                                            │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ delete  │ -p flannel-965704                                                                                                                                                                                                                             │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-438041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-579951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p newest-cni-438041 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ stop    │ -p old-k8s-version-579951 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-603010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993813 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p no-preload-603010 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-579951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-438041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ image   │ newest-cni-438041 image list --format=json                                                                                                                                                                                                    │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p newest-cni-438041 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:11:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:11:49.286034  651375 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:11:49.286341  651375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:11:49.286354  651375 out.go:374] Setting ErrFile to fd 2...
	I1124 03:11:49.286361  651375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:11:49.286680  651375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:11:49.287212  651375 out.go:368] Setting JSON to false
	I1124 03:11:49.288472  651375 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6856,"bootTime":1763947053,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:11:49.288536  651375 start.go:143] virtualization: kvm guest
	I1124 03:11:49.290525  651375 out.go:179] * [newest-cni-438041] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:11:49.291632  651375 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:11:49.291682  651375 notify.go:221] Checking for updates...
	I1124 03:11:49.293520  651375 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:11:49.295168  651375 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:49.296316  651375 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:11:49.297301  651375 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:11:49.298347  651375 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:11:49.299864  651375 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:49.300465  651375 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:11:49.326011  651375 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:11:49.326117  651375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:11:49.389197  651375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-24 03:11:49.379237015 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:11:49.389384  651375 docker.go:319] overlay module found
	I1124 03:11:49.391058  651375 out.go:179] * Using the docker driver based on existing profile
	I1124 03:11:49.394787  651375 start.go:309] selected driver: docker
	I1124 03:11:49.394805  651375 start.go:927] validating driver "docker" against &{Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:49.394909  651375 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:11:49.395601  651375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:11:49.456410  651375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 03:11:49.446909447 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:11:49.456803  651375 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:11:49.456862  651375 cni.go:84] Creating CNI manager for ""
	I1124 03:11:49.456959  651375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:49.457023  651375 start.go:353] cluster config:
	{Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:49.459312  651375 out.go:179] * Starting "newest-cni-438041" primary control-plane node in "newest-cni-438041" cluster
	I1124 03:11:49.460297  651375 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:11:49.461390  651375 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:11:49.462323  651375 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:11:49.462354  651375 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:11:49.462369  651375 cache.go:65] Caching tarball of preloaded images
	I1124 03:11:49.462426  651375 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:11:49.462483  651375 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:11:49.462500  651375 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:11:49.462642  651375 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json ...
	I1124 03:11:49.483910  651375 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:11:49.483934  651375 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:11:49.483955  651375 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:11:49.484000  651375 start.go:360] acquireMachinesLock for newest-cni-438041: {Name:mk895e89056f5ce7564002ba75457dcfde41ce4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:11:49.484055  651375 start.go:364] duration metric: took 37.469µs to acquireMachinesLock for "newest-cni-438041"
	I1124 03:11:49.484071  651375 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:11:49.484076  651375 fix.go:54] fixHost starting: 
	I1124 03:11:49.484279  651375 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:49.501819  651375 fix.go:112] recreateIfNeeded on newest-cni-438041: state=Stopped err=<nil>
	W1124 03:11:49.501854  651375 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:11:48.543265  650744 out.go:252] * Restarting existing docker container for "old-k8s-version-579951" ...
	I1124 03:11:48.543327  650744 cli_runner.go:164] Run: docker start old-k8s-version-579951
	I1124 03:11:49.018869  650744 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:49.037977  650744 kic.go:430] container "old-k8s-version-579951" state is running.
	I1124 03:11:49.038445  650744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-579951
	I1124 03:11:49.058457  650744 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/config.json ...
	I1124 03:11:49.058699  650744 machine.go:94] provisionDockerMachine start ...
	I1124 03:11:49.058779  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:49.079127  650744 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:49.079531  650744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1124 03:11:49.079548  650744 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:11:49.080281  650744 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35748->127.0.0.1:33478: read: connection reset by peer
	I1124 03:11:52.216396  650744 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-579951
	
	I1124 03:11:52.216428  650744 ubuntu.go:182] provisioning hostname "old-k8s-version-579951"
	I1124 03:11:52.216485  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:52.234578  650744 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:52.234796  650744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1124 03:11:52.234808  650744 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-579951 && echo "old-k8s-version-579951" | sudo tee /etc/hostname
	I1124 03:11:52.379425  650744 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-579951
	
	I1124 03:11:52.379486  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:52.396899  650744 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:52.397126  650744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1124 03:11:52.397151  650744 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-579951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-579951/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-579951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:11:52.532664  650744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:11:52.532691  650744 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:11:52.532712  650744 ubuntu.go:190] setting up certificates
	I1124 03:11:52.532732  650744 provision.go:84] configureAuth start
	I1124 03:11:52.532805  650744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-579951
	I1124 03:11:52.549693  650744 provision.go:143] copyHostCerts
	I1124 03:11:52.549771  650744 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:11:52.549790  650744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:11:52.549866  650744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:11:52.550013  650744 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:11:52.550027  650744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:11:52.550070  650744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:11:52.550178  650744 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:11:52.550190  650744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:11:52.550229  650744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:11:52.550328  650744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-579951 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-579951]
	I1124 03:11:52.615306  650744 provision.go:177] copyRemoteCerts
	I1124 03:11:52.615355  650744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:11:52.615407  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:52.632115  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:52.729274  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 03:11:52.745792  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:11:52.762196  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:11:52.778250  650744 provision.go:87] duration metric: took 245.489826ms to configureAuth
	I1124 03:11:52.778274  650744 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:11:52.778434  650744 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:11:52.778558  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:52.795721  650744 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:52.795982  650744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1124 03:11:52.796014  650744 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:11:53.108304  650744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:11:53.108332  650744 machine.go:97] duration metric: took 4.049613812s to provisionDockerMachine
	I1124 03:11:53.108346  650744 start.go:293] postStartSetup for "old-k8s-version-579951" (driver="docker")
	I1124 03:11:53.108359  650744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:11:53.108417  650744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:11:53.108463  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:53.128841  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:53.228947  650744 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:11:53.232304  650744 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:11:53.232329  650744 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:11:53.232339  650744 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:11:53.232380  650744 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:11:53.232489  650744 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:11:53.232630  650744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:11:53.239802  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:53.257964  650744 start.go:296] duration metric: took 149.602953ms for postStartSetup
	I1124 03:11:53.258035  650744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:11:53.258075  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:53.276087  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:49.503558  651375 out.go:252] * Restarting existing docker container for "newest-cni-438041" ...
	I1124 03:11:49.503632  651375 cli_runner.go:164] Run: docker start newest-cni-438041
	I1124 03:11:49.775057  651375 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:49.792897  651375 kic.go:430] container "newest-cni-438041" state is running.
	I1124 03:11:49.793249  651375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:49.810414  651375 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json ...
	I1124 03:11:49.810621  651375 machine.go:94] provisionDockerMachine start ...
	I1124 03:11:49.810704  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:49.827728  651375 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:49.828028  651375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1124 03:11:49.828047  651375 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:11:49.828727  651375 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54342->127.0.0.1:33483: read: connection reset by peer
	I1124 03:11:52.965403  651375 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-438041
	
	I1124 03:11:52.965440  651375 ubuntu.go:182] provisioning hostname "newest-cni-438041"
	I1124 03:11:52.965504  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:52.984685  651375 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:52.985019  651375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1124 03:11:52.985038  651375 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-438041 && echo "newest-cni-438041" | sudo tee /etc/hostname
	I1124 03:11:53.137644  651375 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-438041
	
	I1124 03:11:53.137724  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:53.155256  651375 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:53.155466  651375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1124 03:11:53.155486  651375 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-438041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-438041/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-438041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:11:53.293217  651375 main.go:143] libmachine: SSH cmd err, output: <nil>: 
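	The hostname script above is idempotent: it leaves /etc/hosts alone when the name is already mapped, rewrites an existing 127.0.1.1 entry, and appends one otherwise, so repeated provisioning runs converge on the same file. A minimal Go sketch of the same logic; ensureHostsEntry is a hypothetical helper, not minikube's implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry mirrors the provisioning script above: rewrite an
    // existing 127.0.1.1 line to point at the new hostname, or append one.
    // (The real script first greps to skip the write when the hostname is
    // already mapped.)
    func ensureHostsEntry(path, hostname string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	lines := strings.Split(string(data), "\n")
    	replaced := false
    	for i, line := range lines {
    		if strings.HasPrefix(line, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + hostname
    			replaced = true
    		}
    	}
    	if !replaced {
    		lines = append(lines, "127.0.1.1 "+hostname)
    	}
    	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "newest-cni-438041"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }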
	I1124 03:11:53.293258  651375 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:11:53.293286  651375 ubuntu.go:190] setting up certificates
	I1124 03:11:53.293297  651375 provision.go:84] configureAuth start
	I1124 03:11:53.293346  651375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:53.311194  651375 provision.go:143] copyHostCerts
	I1124 03:11:53.311248  651375 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:11:53.311265  651375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:11:53.311322  651375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:11:53.311414  651375 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:11:53.311423  651375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:11:53.311449  651375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:11:53.311514  651375 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:11:53.311528  651375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:11:53.311551  651375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:11:53.311617  651375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.newest-cni-438041 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-438041]
	I1124 03:11:53.385091  651375 provision.go:177] copyRemoteCerts
	I1124 03:11:53.385135  651375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:11:53.385171  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:53.403988  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:53.504556  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:11:53.521266  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:11:53.537701  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:11:53.554294  651375 provision.go:87] duration metric: took 260.985811ms to configureAuth
	I1124 03:11:53.554314  651375 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:11:53.554467  651375 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:53.554556  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:53.573458  651375 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:53.573772  651375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1124 03:11:53.573798  651375 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:11:53.876636  651375 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:11:53.876661  651375 machine.go:97] duration metric: took 4.066025512s to provisionDockerMachine
	I1124 03:11:53.876676  651375 start.go:293] postStartSetup for "newest-cni-438041" (driver="docker")
	I1124 03:11:53.876690  651375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:11:53.876748  651375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:11:53.876833  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:53.901478  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:54.002635  651375 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:11:54.006142  651375 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:11:54.006170  651375 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:11:54.006184  651375 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:11:54.006247  651375 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:11:54.006316  651375 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:11:54.006400  651375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:11:54.014092  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:54.031386  651375 start.go:296] duration metric: took 154.69471ms for postStartSetup
	I1124 03:11:54.031487  651375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:11:54.031538  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:54.055932  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:54.155460  651375 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:11:54.160082  651375 fix.go:56] duration metric: took 4.676000022s for fixHost
	I1124 03:11:54.160106  651375 start.go:83] releasing machines lock for "newest-cni-438041", held for 4.676040415s
	I1124 03:11:54.160165  651375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:54.177706  651375 ssh_runner.go:195] Run: cat /version.json
	I1124 03:11:54.177749  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:54.177800  651375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:11:54.177875  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:54.201769  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:54.202803  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:53.370290  650744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:11:53.374610  650744 fix.go:56] duration metric: took 4.849001876s for fixHost
	I1124 03:11:53.374635  650744 start.go:83] releasing machines lock for "old-k8s-version-579951", held for 4.849045163s
	I1124 03:11:53.374701  650744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-579951
	I1124 03:11:53.391284  650744 ssh_runner.go:195] Run: cat /version.json
	I1124 03:11:53.391357  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:53.391387  650744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:11:53.391455  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:53.410218  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:53.410760  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:53.559012  650744 ssh_runner.go:195] Run: systemctl --version
	I1124 03:11:53.565240  650744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:11:53.599725  650744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:11:53.604385  650744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:11:53.604445  650744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:11:53.612174  650744 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:11:53.612203  650744 start.go:496] detecting cgroup driver to use...
	I1124 03:11:53.612238  650744 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:11:53.612278  650744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:11:53.626624  650744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:11:53.638422  650744 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:11:53.638469  650744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:11:53.651771  650744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:11:53.663312  650744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:11:53.742876  650744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:11:53.829646  650744 docker.go:234] disabling docker service ...
	I1124 03:11:53.829715  650744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:11:53.843791  650744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:11:53.855910  650744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:11:53.942002  650744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:11:54.032167  650744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:11:54.050243  650744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:11:54.067047  650744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1124 03:11:54.067113  650744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.077670  650744 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:11:54.077746  650744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.087852  650744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.097146  650744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.106088  650744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:11:54.115186  650744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.123913  650744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.132185  650744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.140989  650744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:11:54.148242  650744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:11:54.155386  650744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:54.241274  650744 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:11:54.374173  650744 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:11:54.374240  650744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:11:54.378159  650744 start.go:564] Will wait 60s for crictl version
	I1124 03:11:54.378214  650744 ssh_runner.go:195] Run: which crictl
	I1124 03:11:54.382186  650744 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:11:54.407341  650744 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:11:54.407411  650744 ssh_runner.go:195] Run: crio --version
	I1124 03:11:54.435584  650744 ssh_runner.go:195] Run: crio --version
	I1124 03:11:54.466574  650744 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
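	After CRI-O is restarted, each run blocks until the runtime socket appears ("Will wait 60s for socket path /var/run/crio/crio.sock") before probing crictl. A hedged Go sketch of such a readiness poll, illustrative rather than minikube's actual code:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until the CRI-O socket exists or the timeout
    // expires, mirroring the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }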
	I1124 03:11:54.354645  651375 ssh_runner.go:195] Run: systemctl --version
	I1124 03:11:54.361151  651375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:11:54.396538  651375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:11:54.401291  651375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:11:54.401363  651375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:11:54.410052  651375 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:11:54.410074  651375 start.go:496] detecting cgroup driver to use...
	I1124 03:11:54.410102  651375 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:11:54.410175  651375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:11:54.424400  651375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:11:54.436581  651375 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:11:54.436641  651375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:11:54.451281  651375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:11:54.464491  651375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:11:54.544542  651375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:11:54.636859  651375 docker.go:234] disabling docker service ...
	I1124 03:11:54.636935  651375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:11:54.650564  651375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:11:54.662584  651375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:11:54.750225  651375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:11:54.837494  651375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:11:54.851435  651375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:11:54.868586  651375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:11:54.868644  651375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.879945  651375 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:11:54.880011  651375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.891096  651375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.899537  651375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.907696  651375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:11:54.915645  651375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.924063  651375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.932333  651375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.940494  651375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:11:54.947799  651375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:11:54.955005  651375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:55.036032  651375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:11:55.166161  651375 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:11:55.166223  651375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:11:55.169942  651375 start.go:564] Will wait 60s for crictl version
	I1124 03:11:55.169994  651375 ssh_runner.go:195] Run: which crictl
	I1124 03:11:55.174111  651375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:11:55.201399  651375 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:11:55.201458  651375 ssh_runner.go:195] Run: crio --version
	I1124 03:11:55.229114  651375 ssh_runner.go:195] Run: crio --version
	I1124 03:11:55.263353  651375 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
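	The runtime configuration pass above is a series of in-place sed edits to /etc/crio/crio.conf.d/02-crio.conf: set the pause_image (3.9 for the v1.28.0 cluster, 3.10.1 here), force cgroup_manager = "systemd", and allow unprivileged low ports. The same rewrite can be expressed with a line-anchored regexp in Go; a sketch assuming the key already exists in the file, with a hypothetical helper name:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setPauseImage rewrites the pause_image key in a crio.conf drop-in,
    // the Go analogue of the sed command in the log above. It assumes the
    // key is already present, just as the sed 's|^.*pause_image = ...|' does.
    func setPauseImage(path, image string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	out := re.ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", image)))
    	return os.WriteFile(path, out, 0644)
    }

    func main() {
    	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }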
	I1124 03:11:55.264458  651375 cli_runner.go:164] Run: docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:11:55.283878  651375 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:11:55.287727  651375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:55.300854  651375 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 03:11:54.467605  650744 cli_runner.go:164] Run: docker network inspect old-k8s-version-579951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:11:54.484445  650744 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 03:11:54.488309  650744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:54.501278  650744 kubeadm.go:884] updating cluster {Name:old-k8s-version-579951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-579951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:11:54.501423  650744 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 03:11:54.501496  650744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:54.531372  650744 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:54.531395  650744 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:11:54.531441  650744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:54.555715  650744 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:54.555734  650744 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:11:54.555741  650744 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1124 03:11:54.555839  650744 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-579951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-579951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
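	The kubelet unit printed above is rendered in memory and then copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 373-byte scp a few lines below). A sketch of rendering such a drop-in with text/template; the struct and field names are hypothetical, not minikube's:

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletUnit holds the values interpolated into the drop-in; the
    // field names here are illustrative only.
    type kubeletUnit struct {
    	BinDir, Hostname, NodeIP string
    }

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart={{.BinDir}}/kubelet --hostname-override={{.Hostname}} --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(dropIn))
    	// Render to stdout; minikube renders to memory and copies over SSH.
    	_ = t.Execute(os.Stdout, kubeletUnit{
    		BinDir:   "/var/lib/minikube/binaries/v1.28.0",
    		Hostname: "old-k8s-version-579951",
    		NodeIP:   "192.168.103.2",
    	})
    }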
	I1124 03:11:54.555930  650744 ssh_runner.go:195] Run: crio config
	I1124 03:11:54.617621  650744 cni.go:84] Creating CNI manager for ""
	I1124 03:11:54.617645  650744 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:54.617665  650744 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:11:54.617695  650744 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-579951 NodeName:old-k8s-version-579951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:54.617850  650744 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-579951"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:11:54.617945  650744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 03:11:54.625707  650744 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:11:54.625771  650744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:54.633259  650744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1124 03:11:54.645713  650744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:54.657930  650744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1124 03:11:54.670120  650744 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:54.673421  650744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:54.682551  650744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:54.764746  650744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:54.787574  650744 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951 for IP: 192.168.103.2
	I1124 03:11:54.787596  650744 certs.go:195] generating shared ca certs ...
	I1124 03:11:54.787617  650744 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:54.787796  650744 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:54.787857  650744 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:54.787999  650744 certs.go:257] generating profile certs ...
	I1124 03:11:54.788183  650744 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/client.key
	I1124 03:11:54.788276  650744 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/apiserver.key.e6745a5b
	I1124 03:11:54.788326  650744 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/proxy-client.key
	I1124 03:11:54.788469  650744 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:54.788513  650744 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:54.788527  650744 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:54.788569  650744 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:54.788606  650744 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:54.788636  650744 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:54.788693  650744 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:54.789558  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:54.808835  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:54.827231  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:54.845706  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:54.868181  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 03:11:54.890468  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:11:54.907700  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:54.924797  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:11:54.941639  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:54.958656  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:54.975700  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:54.997087  650744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:55.008914  650744 ssh_runner.go:195] Run: openssl version
	I1124 03:11:55.014559  650744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:55.022283  650744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:55.025806  650744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:55.025869  650744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:55.063352  650744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:11:55.071428  650744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:55.080766  650744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:55.084543  650744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:55.084588  650744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:55.120661  650744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:55.128811  650744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:55.137242  650744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:55.140743  650744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:55.140779  650744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:55.177906  650744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
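	Each CA placed under /usr/share/ca-certificates is symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0 and 3ec20f2e.0 above), which is how OpenSSL's hashed-directory lookup finds trust anchors. A sketch that shells out to openssl for the hash, assuming openssl is on PATH; the helper name is hypothetical:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash replicates the log's "openssl x509 -hash" plus
    // "ln -fs" pair: compute the subject hash of a PEM certificate and
    // symlink it into certDir as <hash>.0.
    func linkBySubjectHash(pem, certDir string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join(certDir, hash+".0")
    	os.Remove(link) // -f semantics: replace any existing link
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }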
	I1124 03:11:55.186832  650744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:55.190616  650744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:11:55.227571  650744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:11:55.275193  650744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:11:55.323462  650744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:11:55.367108  650744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:11:55.425531  650744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 03:11:55.488993  650744 kubeadm.go:401] StartCluster: {Name:old-k8s-version-579951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-579951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:55.489121  650744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:55.489205  650744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:55.528987  650744 cri.go:89] found id: "cc8b5ee4851c9ae1241dd77995f3d1a2e725abb08136f47c106f5adf7f25f2a7"
	I1124 03:11:55.529073  650744 cri.go:89] found id: "3176f2d8220eaa411e72fa77d582041c78e4d0b8acbd739cd01992ec3cfa7230"
	I1124 03:11:55.529090  650744 cri.go:89] found id: "30d22d684ad7501e38080ff45bbe87f71a21252754ba692fc20125e3845f807a"
	I1124 03:11:55.529115  650744 cri.go:89] found id: "3356da3bf9c8232ed305911fa37644fd0513640f4477238b1a7e39b8e438c2a0"
	I1124 03:11:55.529119  650744 cri.go:89] found id: ""
	I1124 03:11:55.529165  650744 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:11:55.543702  650744 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:11:55Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:11:55.543919  650744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:55.555836  650744 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:11:55.556700  650744 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:11:55.556774  650744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:11:55.566242  650744 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:11:55.567163  650744 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-579951" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:55.567663  650744 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-579951" cluster setting kubeconfig missing "old-k8s-version-579951" context setting]
	I1124 03:11:55.568451  650744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:55.570408  650744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:11:55.584022  650744 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1124 03:11:55.584094  650744 kubeadm.go:602] duration metric: took 27.380465ms to restartPrimaryControlPlane
	I1124 03:11:55.584116  650744 kubeadm.go:403] duration metric: took 95.136682ms to StartCluster
	I1124 03:11:55.584143  650744 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:55.584226  650744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:55.585768  650744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:55.585994  650744 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:55.586185  650744 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:55.586292  650744 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-579951"
	I1124 03:11:55.586318  650744 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-579951"
	W1124 03:11:55.586329  650744 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:11:55.586360  650744 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:55.586872  650744 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:55.587179  650744 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:11:55.587199  650744 addons.go:70] Setting dashboard=true in profile "old-k8s-version-579951"
	I1124 03:11:55.587372  650744 addons.go:239] Setting addon dashboard=true in "old-k8s-version-579951"
	W1124 03:11:55.587382  650744 addons.go:248] addon dashboard should already be in state true
	I1124 03:11:55.587406  650744 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:55.587869  650744 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:55.587220  650744 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-579951"
	I1124 03:11:55.588633  650744 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-579951"
	I1124 03:11:55.588966  650744 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:55.590288  650744 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:55.595231  650744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:55.621484  650744 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-579951"
	W1124 03:11:55.621610  650744 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:11:55.621688  650744 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:55.622417  650744 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:55.626837  650744 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:11:55.626840  650744 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:55.628128  650744 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:55.628176  650744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:55.628248  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:55.628181  650744 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:11:55.302209  651375 kubeadm.go:884] updating cluster {Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:11:55.302390  651375 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:11:55.302458  651375 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:55.338853  651375 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:55.338878  651375 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:11:55.338954  651375 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:55.366076  651375 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:55.366102  651375 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:11:55.366109  651375 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:11:55.366220  651375 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-438041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:11:55.366303  651375 ssh_runner.go:195] Run: crio config
	I1124 03:11:55.440125  651375 cni.go:84] Creating CNI manager for ""
	I1124 03:11:55.440166  651375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:55.440190  651375 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 03:11:55.440304  651375 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-438041 NodeName:newest-cni-438041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:55.440585  651375 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-438041"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
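
	The rendered manifest above is generated in Go from the kubeadm options struct printed at kubeadm.go:190 and shipped to /var/tmp/minikube/kubeadm.yaml.new for the diff that follows. A minimal sketch of that render step, assuming a hypothetical Params struct (not minikube's actual types), using text/template:

package main

import (
	"os"
	"text/template"
)

// Params is a hypothetical subset of the options shown in the log above.
type Params struct {
	AdvertiseAddress string
	BindPort         int
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const manifest = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(manifest))
	// Values copied from the run above.
	p := Params{"192.168.94.2", 8443, "10.42.0.0/16", "10.96.0.0/12", "v1.34.1"}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}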
	
	I1124 03:11:55.440735  651375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:55.453216  651375 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:11:55.453431  651375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:55.467527  651375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:11:55.487696  651375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:55.504184  651375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1124 03:11:55.518982  651375 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:55.524390  651375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:55.537713  651375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:55.688666  651375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:55.719136  651375 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041 for IP: 192.168.94.2
	I1124 03:11:55.719216  651375 certs.go:195] generating shared ca certs ...
	I1124 03:11:55.719253  651375 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:55.719452  651375 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:55.719506  651375 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:55.719515  651375 certs.go:257] generating profile certs ...
	I1124 03:11:55.719641  651375 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key
	I1124 03:11:55.719706  651375 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183
	I1124 03:11:55.719758  651375 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key
	I1124 03:11:55.719908  651375 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:55.719950  651375 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:55.719960  651375 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:55.719994  651375 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:55.720030  651375 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:55.720059  651375 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:55.720122  651375 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:55.728513  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:55.753214  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:55.773936  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:55.797324  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:55.826138  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:11:55.864774  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:11:55.890504  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:55.915534  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:11:55.936867  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:55.957784  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:55.977419  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:55.996059  651375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:56.009161  651375 ssh_runner.go:195] Run: openssl version
	I1124 03:11:56.015044  651375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:56.023454  651375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:56.027144  651375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:56.027196  651375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:56.064292  651375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:11:56.072189  651375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:56.080235  651375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:56.083782  651375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:56.083827  651375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:56.116705  651375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:11:56.124082  651375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:56.132005  651375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:56.135614  651375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:56.135659  651375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:56.169786  651375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
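
	The hash-and-symlink sequence above follows OpenSSL's subject-hash convention: `openssl x509 -hash -noout` prints the subject hash (b5213941 for minikubeCA.pem in this run), and the certificate is linked as <hash>.0 under /etc/ssl/certs so TLS stacks can find it by hash. A sketch of the same two steps in Go, shelling out to openssl (assumes openssl on PATH and root privileges, like the sudo commands above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of `ln -fs cert link`.
	_ = os.Remove(link)
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", cert)
}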
	I1124 03:11:56.177431  651375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:56.181071  651375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:11:56.214240  651375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:11:56.247819  651375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:11:56.282593  651375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:11:56.330465  651375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:11:56.375382  651375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
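
	The `-checkend 86400` runs above assert that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. A minimal pure-Go equivalent of one such check, assuming a PEM-encoded certificate path as the only argument:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile(os.Args[1]) // e.g. /var/lib/minikube/certs/front-proxy-client.crt
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in certificate file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(86400 * time.Second)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1) // matches openssl's non-zero exit for a failed -checkend
	}
	fmt.Println("certificate still valid at", deadline.Format(time.RFC3339))
}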
	I1124 03:11:56.429850  651375 kubeadm.go:401] StartCluster: {Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:56.429998  651375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:56.430062  651375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:56.463304  651375 cri.go:89] found id: "0903674c0ff17f5f88d257aea9b1e2cf56ff9103105cdbeb4e86732b145c0bef"
	I1124 03:11:56.463327  651375 cri.go:89] found id: "5dcec9dda2453f45f4516eff019d2077d2052e95c11d896705f53b3ac53c11a9"
	I1124 03:11:56.463333  651375 cri.go:89] found id: "453c0dc25dde51ccdc58f6043d75d117dc72d3b347ea5068c17db0082002c0ad"
	I1124 03:11:56.463337  651375 cri.go:89] found id: "a629768f55496c2969d757c473189f52d99ddea90e0a365150097df5fe2ec9e2"
	I1124 03:11:56.463341  651375 cri.go:89] found id: ""
	I1124 03:11:56.463386  651375 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:11:56.475440  651375 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:11:56Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:11:56.475515  651375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:56.484103  651375 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:11:56.484119  651375 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:11:56.484167  651375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:11:56.491423  651375 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:11:56.492180  651375 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-438041" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:56.492651  651375 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-438041" cluster setting kubeconfig missing "newest-cni-438041" context setting]
	I1124 03:11:56.493413  651375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:56.494954  651375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:11:56.502146  651375 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 03:11:56.502174  651375 kubeadm.go:602] duration metric: took 18.049259ms to restartPrimaryControlPlane
	I1124 03:11:56.502183  651375 kubeadm.go:403] duration metric: took 72.346726ms to StartCluster
	I1124 03:11:56.502198  651375 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:56.502266  651375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:56.503449  651375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:56.503644  651375 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:56.503802  651375 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:56.503842  651375 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:56.503924  651375 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-438041"
	I1124 03:11:56.503941  651375 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-438041"
	W1124 03:11:56.503952  651375 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:11:56.503979  651375 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:56.504019  651375 addons.go:70] Setting dashboard=true in profile "newest-cni-438041"
	I1124 03:11:56.504063  651375 addons.go:239] Setting addon dashboard=true in "newest-cni-438041"
	W1124 03:11:56.504077  651375 addons.go:248] addon dashboard should already be in state true
	I1124 03:11:56.504113  651375 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:56.504102  651375 addons.go:70] Setting default-storageclass=true in profile "newest-cni-438041"
	I1124 03:11:56.504146  651375 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-438041"
	I1124 03:11:56.504460  651375 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:56.504487  651375 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:56.504613  651375 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:56.505883  651375 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:56.507118  651375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:56.528676  651375 addons.go:239] Setting addon default-storageclass=true in "newest-cni-438041"
	W1124 03:11:56.528700  651375 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:11:56.528729  651375 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:56.529015  651375 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:11:56.529015  651375 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:56.529231  651375 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:56.530408  651375 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:56.530428  651375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:56.530484  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:56.531428  651375 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:11:55.629296  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:11:55.629364  650744 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:11:55.629448  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:55.653096  650744 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:55.653124  650744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:55.653185  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:55.659046  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:55.673014  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:55.694050  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:55.777052  650744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:55.794835  650744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:55.797462  650744 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-579951" to be "Ready" ...
	I1124 03:11:55.801170  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:11:55.801186  650744 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:11:55.818027  650744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:55.823370  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:11:55.823389  650744 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:11:55.857809  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:11:55.857837  650744 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:11:55.882470  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:11:55.882491  650744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:11:55.902562  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:11:55.902586  650744 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:11:55.921007  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:11:55.921028  650744 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:11:55.937245  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:11:55.937265  650744 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:11:55.951635  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:11:55.951656  650744 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:11:55.965030  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:11:55.965053  650744 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:11:55.980361  650744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:11:57.922223  650744 node_ready.go:49] node "old-k8s-version-579951" is "Ready"
	I1124 03:11:57.922259  650744 node_ready.go:38] duration metric: took 2.124761171s for node "old-k8s-version-579951" to be "Ready" ...
	I1124 03:11:57.922276  650744 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:57.922328  650744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:11:56.532483  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:11:56.532503  651375 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:11:56.532557  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:56.558942  651375 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:56.559099  651375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:56.559488  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:56.563483  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:56.565335  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:56.581714  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
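
	Each `new ssh client` line above follows a `docker container inspect` call that resolves which host port Docker mapped to the container's 22/tcp (33483 for newest-cni-438041, 33478 for old-k8s-version-579951). A sketch of that discovery using the same Go template the log shows (assumes the docker CLI on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "newest-cni-438041" // container name from the log above
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out)) // "33483" in this run
	fmt.Printf("ssh -i id_rsa -p %s docker@127.0.0.1\n", port)
}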
	I1124 03:11:56.649131  651375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:56.661798  651375 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:56.661950  651375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:11:56.675962  651375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:56.676028  651375 api_server.go:72] duration metric: took 172.356334ms to wait for apiserver process to appear ...
	I1124 03:11:56.676041  651375 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:56.676058  651375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:56.678809  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:11:56.678827  651375 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:11:56.694241  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:11:56.694259  651375 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:11:56.695582  651375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:56.709532  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:11:56.709553  651375 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:11:56.725361  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:11:56.725379  651375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:11:56.742031  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:11:56.742053  651375 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:11:56.757398  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:11:56.757419  651375 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:11:56.772359  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:11:56.772376  651375 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:11:56.784906  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:11:56.784923  651375 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:11:56.800657  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:11:56.800676  651375 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:11:56.816333  651375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:11:58.429662  651375 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 03:11:58.429696  651375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 03:11:58.429715  651375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:58.506656  651375 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:11:58.506689  651375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:11:58.676122  651375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:58.682095  651375 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:11:58.682121  651375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:11:59.063524  651375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.387529341s)
	I1124 03:11:59.063633  651375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.368021884s)
	I1124 03:11:59.063735  651375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.247358354s)
	I1124 03:11:59.065554  651375 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-438041 addons enable metrics-server
	
	I1124 03:11:59.077814  651375 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 03:11:58.767994  650744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.97312073s)
	I1124 03:11:58.767996  650744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.949932356s)
	I1124 03:11:59.119301  650744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.138860206s)
	I1124 03:11:59.119326  650744 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.196967726s)
	I1124 03:11:59.119356  650744 api_server.go:72] duration metric: took 3.533325717s to wait for apiserver process to appear ...
	I1124 03:11:59.119366  650744 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:59.119389  650744 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 03:11:59.123936  650744 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-579951 addons enable metrics-server
	
	I1124 03:11:59.124802  650744 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 03:11:59.125919  650744 api_server.go:141] control plane version: v1.28.0
	I1124 03:11:59.125946  650744 api_server.go:131] duration metric: took 6.571904ms to wait for apiserver health ...
	I1124 03:11:59.125957  650744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:59.128338  650744 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 03:11:59.078988  651375 addons.go:530] duration metric: took 2.575137687s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 03:11:59.177014  651375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:59.182598  651375 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:11:59.182626  651375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:11:59.676406  651375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:59.680259  651375 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:11:59.681145  651375 api_server.go:141] control plane version: v1.34.1
	I1124 03:11:59.681173  651375 api_server.go:131] duration metric: took 3.005125501s to wait for apiserver health ...
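
	The 403 → 500 → 200 progression above is the normal apiserver startup sequence: anonymous /healthz is forbidden until the RBAC bootstrap roles exist, then individual post-start hooks report failure until they finish, and finally the endpoint returns a plain "ok". A sketch of the polling loop, with TLS verification skipped because the probe runs anonymously against the cluster's self-signed serving certificate:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.94.2:8443/healthz" // endpoint from the log above
	for i := 0; i < 60; i++ {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: %d %s\n", i, resp.StatusCode, string(body))
			if resp.StatusCode == http.StatusOK {
				return // "ok", as at 03:11:59 in the log
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}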
	I1124 03:11:59.681182  651375 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:59.684160  651375 system_pods.go:59] 9 kube-system pods found
	I1124 03:11:59.684187  651375 system_pods.go:61] "coredns-66bc5c9577-b5rlp" [ec3ad010-7694-4640-9638-fe6f5c97f56a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:59.684195  651375 system_pods.go:61] "coredns-66bc5c9577-mwvq8" [c8831e7f-34c0-40c7-a728-7f7882ed604a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:59.684204  651375 system_pods.go:61] "etcd-newest-cni-438041" [7acbb753-dfd2-4438-b370-a7e38c4fbc5d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:11:59.684210  651375 system_pods.go:61] "kindnet-xp46p" [19fa7668-24bd-454c-a5df-37534a06d3a5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:11:59.684216  651375 system_pods.go:61] "kube-apiserver-newest-cni-438041" [c7d90375-f6c0-4a1f-8b80-81574119b191] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:11:59.684222  651375 system_pods.go:61] "kube-controller-manager-newest-cni-438041" [54b144f6-6f26-4e9b-818b-cbb2d7b4c0a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:11:59.684232  651375 system_pods.go:61] "kube-proxy-n85pg" [86f875e2-7efc-4b60-b031-a1de71ea7502] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:11:59.684239  651375 system_pods.go:61] "kube-scheduler-newest-cni-438041" [75e99a3a-d4a9-4428-a52a-ef5ac4edc76c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:11:59.684253  651375 system_pods.go:61] "storage-provisioner" [9a94c2f7-e288-4528-b22c-f413d79bdf46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:59.684259  651375 system_pods.go:74] duration metric: took 3.07116ms to wait for pod list to return data ...
	I1124 03:11:59.684269  651375 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:59.686477  651375 default_sa.go:45] found service account: "default"
	I1124 03:11:59.686502  651375 default_sa.go:55] duration metric: took 2.226294ms for default service account to be created ...
	I1124 03:11:59.686514  651375 kubeadm.go:587] duration metric: took 3.182843557s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:11:59.686535  651375 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:59.689409  651375 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:59.689435  651375 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:59.689452  651375 node_conditions.go:105] duration metric: took 2.911385ms to run NodePressure ...
	I1124 03:11:59.689464  651375 start.go:242] waiting for startup goroutines ...
	I1124 03:11:59.689471  651375 start.go:247] waiting for cluster config update ...
	I1124 03:11:59.689483  651375 start.go:256] writing updated cluster config ...
	I1124 03:11:59.689750  651375 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:59.736246  651375 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:11:59.738970  651375 out.go:179] * Done! kubectl is now configured to use "newest-cni-438041" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.125766044Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.126027673Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-n85pg/POD" id=5ae6d350-208b-4290-92ea-99c5d6a00ea5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.126090392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.12883121Z" level=info msg="Ran pod sandbox 5fbc46437e798eb293c24a234c08d38684e883bc3fb5c526c7a9047be83255d0 with infra container: kube-system/kindnet-xp46p/POD" id=f49ed1ed-486a-412f-a01c-d708ea21a16a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.130018276Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=cd7cccbb-2936-4f53-842a-4be2e060df8a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.131378842Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5ae6d350-208b-4290-92ea-99c5d6a00ea5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.131475992Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=65c3594e-0bec-4411-a85a-3bb7fb0c943e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.132660129Z" level=info msg="Creating container: kube-system/kindnet-xp46p/kindnet-cni" id=c0fddb86-8d94-4a33-9f0c-cca989f4359e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.132747542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.133142693Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.133995431Z" level=info msg="Ran pod sandbox 7e177ac3a85fe32b36785510cafd415d883150ae1a844a6260ec2c1a42df24c2 with infra container: kube-system/kube-proxy-n85pg/POD" id=5ae6d350-208b-4290-92ea-99c5d6a00ea5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.134968586Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e01acad7-b751-4f93-8284-47a83fdf7151 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.136106499Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=40675711-f351-4aff-9773-e7149c24a6a4 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.136813745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.137425414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.137503667Z" level=info msg="Creating container: kube-system/kube-proxy-n85pg/kube-proxy" id=a248f46b-2cb7-476b-9ca6-d34f51c589c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.137621439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.141526503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.141932561Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.163702042Z" level=info msg="Created container ca8f3e49d19f2e01029131d694cb2f3366da510e6fc4954a37c6be9f877ca0a4: kube-system/kindnet-xp46p/kindnet-cni" id=c0fddb86-8d94-4a33-9f0c-cca989f4359e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.164242999Z" level=info msg="Starting container: ca8f3e49d19f2e01029131d694cb2f3366da510e6fc4954a37c6be9f877ca0a4" id=972abd6d-a0a0-45c4-837e-d7dfff2d0e10 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.165996819Z" level=info msg="Started container" PID=1043 containerID=ca8f3e49d19f2e01029131d694cb2f3366da510e6fc4954a37c6be9f877ca0a4 description=kube-system/kindnet-xp46p/kindnet-cni id=972abd6d-a0a0-45c4-837e-d7dfff2d0e10 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fbc46437e798eb293c24a234c08d38684e883bc3fb5c526c7a9047be83255d0
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.168524811Z" level=info msg="Created container e62bf0a89aa63020c936ad6a03e6c0480301613e4cd0a960cd58ad2ac3fe719f: kube-system/kube-proxy-n85pg/kube-proxy" id=a248f46b-2cb7-476b-9ca6-d34f51c589c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.169031946Z" level=info msg="Starting container: e62bf0a89aa63020c936ad6a03e6c0480301613e4cd0a960cd58ad2ac3fe719f" id=c26ba391-fe77-4970-987b-86f108227528 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.172015469Z" level=info msg="Started container" PID=1044 containerID=e62bf0a89aa63020c936ad6a03e6c0480301613e4cd0a960cd58ad2ac3fe719f description=kube-system/kube-proxy-n85pg/kube-proxy id=c26ba391-fe77-4970-987b-86f108227528 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7e177ac3a85fe32b36785510cafd415d883150ae1a844a6260ec2c1a42df24c2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e62bf0a89aa63       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   7e177ac3a85fe       kube-proxy-n85pg                            kube-system
	ca8f3e49d19f2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   5fbc46437e798       kindnet-xp46p                               kube-system
	0903674c0ff17       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   2361cf90a13f8       kube-apiserver-newest-cni-438041            kube-system
	5dcec9dda2453       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   31fc8d0f26d1e       kube-controller-manager-newest-cni-438041   kube-system
	453c0dc25dde5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   8a21d9f9217f3       etcd-newest-cni-438041                      kube-system
	a629768f55496       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   86d1f5bca10ab       kube-scheduler-newest-cni-438041            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-438041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-438041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=newest-cni-438041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_11_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:11:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-438041
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:11:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:11:58 +0000   Mon, 24 Nov 2025 03:11:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:11:58 +0000   Mon, 24 Nov 2025 03:11:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:11:58 +0000   Mon, 24 Nov 2025 03:11:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 03:11:58 +0000   Mon, 24 Nov 2025 03:11:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-438041
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                6b4f4c50-807c-4c82-a9aa-10eb04614b7a
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-438041                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         41s
	  kube-system                 kindnet-xp46p                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      36s
	  kube-system                 kube-apiserver-newest-cni-438041             250m (3%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-newest-cni-438041    200m (2%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-proxy-n85pg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-scheduler-newest-cni-438041             100m (1%)     0 (0%)      0 (0%)           0 (0%)         41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 34s   kube-proxy       
	  Normal  Starting                 4s    kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node newest-cni-438041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node newest-cni-438041 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node newest-cni-438041 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           36s   node-controller  Node newest-cni-438041 event: Registered Node newest-cni-438041 in Controller
	  Normal  RegisteredNode           2s    node-controller  Node newest-cni-438041 event: Registered Node newest-cni-438041 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [453c0dc25dde51ccdc58f6043d75d117dc72d3b347ea5068c17db0082002c0ad] <==
	{"level":"warn","ts":"2025-11-24T03:11:57.739172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.744954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.754282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.762119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.769570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.778424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.786039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.805276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.809138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.823265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.837245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.844772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.850649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.856566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.863385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.870863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.877047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.882850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.889601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.895844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.903801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.918557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.927434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.935738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:58.008527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43236","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:12:03 up  1:54,  0 user,  load average: 3.95, 3.87, 2.53
	Linux newest-cni-438041 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ca8f3e49d19f2e01029131d694cb2f3366da510e6fc4954a37c6be9f877ca0a4] <==
	I1124 03:11:59.364644       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:11:59.364984       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 03:11:59.365144       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:11:59.365159       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:11:59.365179       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:11:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:11:59.649482       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:11:59.649517       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:11:59.649533       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:11:59.650121       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:11:59.949816       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:11:59.949849       1 metrics.go:72] Registering metrics
	I1124 03:11:59.949928       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [0903674c0ff17f5f88d257aea9b1e2cf56ff9103105cdbeb4e86732b145c0bef] <==
	I1124 03:11:58.520225       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 03:11:58.522634       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 03:11:58.523186       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 03:11:58.523356       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 03:11:58.523412       1 aggregator.go:171] initial CRD sync complete...
	I1124 03:11:58.523431       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 03:11:58.523437       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:11:58.523444       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:11:58.523606       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 03:11:58.523659       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:11:58.528867       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 03:11:58.532581       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 03:11:58.544396       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:11:58.835348       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:11:58.869049       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:11:58.899830       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:11:58.921311       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:11:58.928584       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:11:58.976701       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.138.81"}
	I1124 03:11:58.987738       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.237.249"}
	I1124 03:11:59.414454       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:12:02.230862       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:12:02.281400       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:12:02.330655       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:12:02.381285       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5dcec9dda2453f45f4516eff019d2077d2052e95c11d896705f53b3ac53c11a9] <==
	I1124 03:12:01.786270       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:12:01.788477       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:12:01.795388       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:12:01.798651       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:12:01.801826       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:12:01.828538       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:12:01.828587       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:12:01.828598       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:12:01.828606       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:12:01.828622       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:12:01.828651       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 03:12:01.828669       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:12:01.828720       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:12:01.828842       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:12:01.828963       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 03:12:01.829313       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 03:12:01.831284       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:12:01.833452       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:12:01.833488       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:12:01.834587       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 03:12:01.834627       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 03:12:01.834679       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 03:12:01.834687       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 03:12:01.834691       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 03:12:01.849833       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e62bf0a89aa63020c936ad6a03e6c0480301613e4cd0a960cd58ad2ac3fe719f] <==
	I1124 03:11:59.212239       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:11:59.279078       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:11:59.379718       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:11:59.379761       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 03:11:59.379963       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:11:59.397342       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:11:59.397399       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:11:59.402822       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:11:59.403205       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:11:59.403226       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:11:59.404460       1 config.go:200] "Starting service config controller"
	I1124 03:11:59.404486       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:11:59.404564       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:11:59.404586       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:11:59.404739       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:11:59.404761       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:11:59.404809       1 config.go:309] "Starting node config controller"
	I1124 03:11:59.404814       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:11:59.404818       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:11:59.504577       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:11:59.505732       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:11:59.505785       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a629768f55496c2969d757c473189f52d99ddea90e0a365150097df5fe2ec9e2] <==
	I1124 03:11:57.020490       1 serving.go:386] Generated self-signed cert in-memory
	W1124 03:11:58.466869       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:11:58.466956       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 03:11:58.466969       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:11:58.466988       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:11:58.507144       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:11:58.507199       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:11:58.510544       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:11:58.510587       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:11:58.511577       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:11:58.511843       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:11:58.611657       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:11:57 newest-cni-438041 kubelet[674]: E1124 03:11:57.886686     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-438041\" not found" node="newest-cni-438041"
	Nov 24 03:11:57 newest-cni-438041 kubelet[674]: E1124 03:11:57.886829     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-438041\" not found" node="newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.438088     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.543727     674 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.545006     674 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.545052     674 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.545962     674 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: E1124 03:11:58.557289     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-438041\" already exists" pod="kube-system/kube-apiserver-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.557331     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: E1124 03:11:58.565301     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-438041\" already exists" pod="kube-system/kube-controller-manager-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.565336     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: E1124 03:11:58.573007     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-438041\" already exists" pod="kube-system/kube-scheduler-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.573035     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: E1124 03:11:58.581692     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-438041\" already exists" pod="kube-system/etcd-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.812209     674 apiserver.go:52] "Watching apiserver"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.829949     674 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.832023     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19fa7668-24bd-454c-a5df-37534a06d3a5-xtables-lock\") pod \"kindnet-xp46p\" (UID: \"19fa7668-24bd-454c-a5df-37534a06d3a5\") " pod="kube-system/kindnet-xp46p"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.832116     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86f875e2-7efc-4b60-b031-a1de71ea7502-lib-modules\") pod \"kube-proxy-n85pg\" (UID: \"86f875e2-7efc-4b60-b031-a1de71ea7502\") " pod="kube-system/kube-proxy-n85pg"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.832709     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19fa7668-24bd-454c-a5df-37534a06d3a5-lib-modules\") pod \"kindnet-xp46p\" (UID: \"19fa7668-24bd-454c-a5df-37534a06d3a5\") " pod="kube-system/kindnet-xp46p"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.832747     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86f875e2-7efc-4b60-b031-a1de71ea7502-xtables-lock\") pod \"kube-proxy-n85pg\" (UID: \"86f875e2-7efc-4b60-b031-a1de71ea7502\") " pod="kube-system/kube-proxy-n85pg"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.832783     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/19fa7668-24bd-454c-a5df-37534a06d3a5-cni-cfg\") pod \"kindnet-xp46p\" (UID: \"19fa7668-24bd-454c-a5df-37534a06d3a5\") " pod="kube-system/kindnet-xp46p"
	Nov 24 03:12:00 newest-cni-438041 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 03:12:00 newest-cni-438041 kubelet[674]: I1124 03:12:00.689240     674 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 03:12:00 newest-cni-438041 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 03:12:00 newest-cni-438041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438041 -n newest-cni-438041
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438041 -n newest-cni-438041: exit status 2 (322.428653ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-438041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-b5rlp coredns-66bc5c9577-mwvq8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lgwxm kubernetes-dashboard-855c9754f9-4l8m4
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-438041 describe pod coredns-66bc5c9577-b5rlp coredns-66bc5c9577-mwvq8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lgwxm kubernetes-dashboard-855c9754f9-4l8m4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-438041 describe pod coredns-66bc5c9577-b5rlp coredns-66bc5c9577-mwvq8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lgwxm kubernetes-dashboard-855c9754f9-4l8m4: exit status 1 (61.219628ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-b5rlp" not found
	Error from server (NotFound): pods "coredns-66bc5c9577-mwvq8" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-lgwxm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-4l8m4" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-438041 describe pod coredns-66bc5c9577-b5rlp coredns-66bc5c9577-mwvq8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lgwxm kubernetes-dashboard-855c9754f9-4l8m4: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-438041
helpers_test.go:243: (dbg) docker inspect newest-cni-438041:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64",
	        "Created": "2025-11-24T03:11:03.961758173Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 651783,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:11:49.529161449Z",
	            "FinishedAt": "2025-11-24T03:11:48.62049328Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64/hostname",
	        "HostsPath": "/var/lib/docker/containers/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64/hosts",
	        "LogPath": "/var/lib/docker/containers/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64/7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64-json.log",
	        "Name": "/newest-cni-438041",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-438041:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-438041",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7dcb0e0e285e23735bd72a2f907c60f0b6edbf67c52b4bd116a892efe43aed64",
	                "LowerDir": "/var/lib/docker/overlay2/0710d621fdc7311911f475a554b8f76b82b86a9db0b0e85e3045f0e2074e3cc1-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0710d621fdc7311911f475a554b8f76b82b86a9db0b0e85e3045f0e2074e3cc1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0710d621fdc7311911f475a554b8f76b82b86a9db0b0e85e3045f0e2074e3cc1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0710d621fdc7311911f475a554b8f76b82b86a9db0b0e85e3045f0e2074e3cc1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-438041",
	                "Source": "/var/lib/docker/volumes/newest-cni-438041/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-438041",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-438041",
	                "name.minikube.sigs.k8s.io": "newest-cni-438041",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "247d5e804118e132e665508e9478d80a036c20ab09eab525bfdf6959cd6a6736",
	            "SandboxKey": "/var/run/docker/netns/247d5e804118",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-438041": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b30d540ef88b055a6ad3cc188fd27395739f217150ea48ac734e123a015ff9c1",
	                    "EndpointID": "d258b0a220bf40ae620b50ee478407904848ef9f87b20ac197bcfb380c12a70e",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "06:25:0b:77:12:40",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-438041",
	                        "7dcb0e0e285e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438041 -n newest-cni-438041
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438041 -n newest-cni-438041: exit status 2 (334.502125ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-438041 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p flannel-965704 sudo systemctl cat containerd --no-pager                                                                                                                                                                                    │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                             │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo cat /etc/containerd/config.toml                                                                                                                                                                                        │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo containerd config dump                                                                                                                                                                                                 │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                          │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo systemctl cat crio --no-pager                                                                                                                                                                                          │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ ssh     │ -p flannel-965704 sudo crio config                                                                                                                                                                                                            │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ delete  │ -p flannel-965704                                                                                                                                                                                                                             │ flannel-965704               │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:10 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-438041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-579951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p newest-cni-438041 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ stop    │ -p old-k8s-version-579951 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-603010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993813 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ stop    │ -p no-preload-603010 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable dashboard -p old-k8s-version-579951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable dashboard -p newest-cni-438041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ image   │ newest-cni-438041 image list --format=json                                                                                                                                                                                                    │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p newest-cni-438041 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:11:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
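
Each line below carries a glog-style header in exactly that format: severity letter (I/W/E/F), month and day, wall-clock time with microseconds, the thread id, then the source file and line. As a minimal sketch for skimming the timeline (the file name last-start.log is hypothetical), the headers can be pulled out with awk:

	awk 'match($0, /[IWEF][0-9][0-9][0-9][0-9] [0-9][0-9]:[0-9][0-9]:[0-9][0-9]\.[0-9]+/) { print substr($0, RSTART, RLENGTH) }' last-start.log
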
	I1124 03:11:49.286034  651375 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:11:49.286341  651375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:11:49.286354  651375 out.go:374] Setting ErrFile to fd 2...
	I1124 03:11:49.286361  651375 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:11:49.286680  651375 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:11:49.287212  651375 out.go:368] Setting JSON to false
	I1124 03:11:49.288472  651375 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6856,"bootTime":1763947053,"procs":278,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:11:49.288536  651375 start.go:143] virtualization: kvm guest
	I1124 03:11:49.290525  651375 out.go:179] * [newest-cni-438041] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:11:49.291632  651375 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:11:49.291682  651375 notify.go:221] Checking for updates...
	I1124 03:11:49.293520  651375 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:11:49.295168  651375 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:49.296316  651375 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:11:49.297301  651375 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:11:49.298347  651375 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:11:49.299864  651375 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:49.300465  651375 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:11:49.326011  651375 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:11:49.326117  651375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:11:49.389197  651375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-24 03:11:49.379237015 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
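
The blob above is the raw output of docker system info --format "{{json .}}", which minikube parses before selecting the driver. To eyeball just the fields that matter for this run (cgroup driver, CPUs, memory, engine version), a quick sketch assuming jq is installed on the agent:

	docker system info --format '{{json .}}' | jq '{CgroupDriver, NCPU, MemTotal, ServerVersion}'
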
	I1124 03:11:49.389384  651375 docker.go:319] overlay module found
	I1124 03:11:49.391058  651375 out.go:179] * Using the docker driver based on existing profile
	I1124 03:11:49.394787  651375 start.go:309] selected driver: docker
	I1124 03:11:49.394805  651375 start.go:927] validating driver "docker" against &{Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:49.394909  651375 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:11:49.395601  651375 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:11:49.456410  651375 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:75 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 03:11:49.446909447 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:11:49.456803  651375 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:11:49.456862  651375 cni.go:84] Creating CNI manager for ""
	I1124 03:11:49.456959  651375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:49.457023  651375 start.go:353] cluster config:
	{Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:49.459312  651375 out.go:179] * Starting "newest-cni-438041" primary control-plane node in "newest-cni-438041" cluster
	I1124 03:11:49.460297  651375 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:11:49.461390  651375 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:11:49.462323  651375 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:11:49.462354  651375 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:11:49.462369  651375 cache.go:65] Caching tarball of preloaded images
	I1124 03:11:49.462426  651375 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:11:49.462483  651375 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:11:49.462500  651375 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:11:49.462642  651375 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json ...
	I1124 03:11:49.483910  651375 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:11:49.483934  651375 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:11:49.483955  651375 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:11:49.484000  651375 start.go:360] acquireMachinesLock for newest-cni-438041: {Name:mk895e89056f5ce7564002ba75457dcfde41ce4c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:11:49.484055  651375 start.go:364] duration metric: took 37.469µs to acquireMachinesLock for "newest-cni-438041"
	I1124 03:11:49.484071  651375 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:11:49.484076  651375 fix.go:54] fixHost starting: 
	I1124 03:11:49.484279  651375 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:49.501819  651375 fix.go:112] recreateIfNeeded on newest-cni-438041: state=Stopped err=<nil>
	W1124 03:11:49.501854  651375 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:11:48.543265  650744 out.go:252] * Restarting existing docker container for "old-k8s-version-579951" ...
	I1124 03:11:48.543327  650744 cli_runner.go:164] Run: docker start old-k8s-version-579951
	I1124 03:11:49.018869  650744 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:49.037977  650744 kic.go:430] container "old-k8s-version-579951" state is running.
	I1124 03:11:49.038445  650744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-579951
	I1124 03:11:49.058457  650744 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/config.json ...
	I1124 03:11:49.058699  650744 machine.go:94] provisionDockerMachine start ...
	I1124 03:11:49.058779  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:49.079127  650744 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:49.079531  650744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1124 03:11:49.079548  650744 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:11:49.080281  650744 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35748->127.0.0.1:33478: read: connection reset by peer
	I1124 03:11:52.216396  650744 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-579951
	
	I1124 03:11:52.216428  650744 ubuntu.go:182] provisioning hostname "old-k8s-version-579951"
	I1124 03:11:52.216485  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:52.234578  650744 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:52.234796  650744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1124 03:11:52.234808  650744 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-579951 && echo "old-k8s-version-579951" | sudo tee /etc/hostname
	I1124 03:11:52.379425  650744 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-579951
	
	I1124 03:11:52.379486  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:52.396899  650744 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:52.397126  650744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1124 03:11:52.397151  650744 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-579951' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-579951/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-579951' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:11:52.532664  650744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
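
The script above guarantees exactly one 127.0.1.1 entry for the node name, either rewriting an existing one in place or appending a new one. A hypothetical spot check from inside the container (not part of the test run):

	getent hosts old-k8s-version-579951    # expect: 127.0.1.1       old-k8s-version-579951
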
	I1124 03:11:52.532691  650744 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:11:52.532712  650744 ubuntu.go:190] setting up certificates
	I1124 03:11:52.532732  650744 provision.go:84] configureAuth start
	I1124 03:11:52.532805  650744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-579951
	I1124 03:11:52.549693  650744 provision.go:143] copyHostCerts
	I1124 03:11:52.549771  650744 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:11:52.549790  650744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:11:52.549866  650744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:11:52.550013  650744 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:11:52.550027  650744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:11:52.550070  650744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:11:52.550178  650744 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:11:52.550190  650744 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:11:52.550229  650744 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:11:52.550328  650744 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-579951 san=[127.0.0.1 192.168.103.2 localhost minikube old-k8s-version-579951]
	I1124 03:11:52.615306  650744 provision.go:177] copyRemoteCerts
	I1124 03:11:52.615355  650744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:11:52.615407  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:52.632115  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:52.729274  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 03:11:52.745792  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:11:52.762196  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:11:52.778250  650744 provision.go:87] duration metric: took 245.489826ms to configureAuth
	I1124 03:11:52.778274  650744 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:11:52.778434  650744 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:11:52.778558  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:52.795721  650744 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:52.795982  650744 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33478 <nil> <nil>}
	I1124 03:11:52.796014  650744 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:11:53.108304  650744 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:11:53.108332  650744 machine.go:97] duration metric: took 4.049613812s to provisionDockerMachine
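
The crio.minikube drop-in written just above can be sanity-checked with two commands (a hypothetical check, assuming the crio systemd unit loads /etc/sysconfig/crio.minikube as an EnvironmentFile; the test itself does not run these):

	cat /etc/sysconfig/crio.minikube    # expect: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio       # expect: active, after the restart above
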
	I1124 03:11:53.108346  650744 start.go:293] postStartSetup for "old-k8s-version-579951" (driver="docker")
	I1124 03:11:53.108359  650744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:11:53.108417  650744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:11:53.108463  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:53.128841  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:53.228947  650744 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:11:53.232304  650744 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:11:53.232329  650744 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:11:53.232339  650744 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:11:53.232380  650744 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:11:53.232489  650744 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:11:53.232630  650744 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:11:53.239802  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:53.257964  650744 start.go:296] duration metric: took 149.602953ms for postStartSetup
	I1124 03:11:53.258035  650744 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:11:53.258075  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:53.276087  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:49.503558  651375 out.go:252] * Restarting existing docker container for "newest-cni-438041" ...
	I1124 03:11:49.503632  651375 cli_runner.go:164] Run: docker start newest-cni-438041
	I1124 03:11:49.775057  651375 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:49.792897  651375 kic.go:430] container "newest-cni-438041" state is running.
	I1124 03:11:49.793249  651375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:49.810414  651375 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/config.json ...
	I1124 03:11:49.810621  651375 machine.go:94] provisionDockerMachine start ...
	I1124 03:11:49.810704  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:49.827728  651375 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:49.828028  651375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1124 03:11:49.828047  651375 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:11:49.828727  651375 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54342->127.0.0.1:33483: read: connection reset by peer
	I1124 03:11:52.965403  651375 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-438041
	
	I1124 03:11:52.965440  651375 ubuntu.go:182] provisioning hostname "newest-cni-438041"
	I1124 03:11:52.965504  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:52.984685  651375 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:52.985019  651375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1124 03:11:52.985038  651375 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-438041 && echo "newest-cni-438041" | sudo tee /etc/hostname
	I1124 03:11:53.137644  651375 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-438041
	
	I1124 03:11:53.137724  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:53.155256  651375 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:53.155466  651375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1124 03:11:53.155486  651375 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-438041' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-438041/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-438041' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:11:53.293217  651375 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:11:53.293258  651375 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:11:53.293286  651375 ubuntu.go:190] setting up certificates
	I1124 03:11:53.293297  651375 provision.go:84] configureAuth start
	I1124 03:11:53.293346  651375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:53.311194  651375 provision.go:143] copyHostCerts
	I1124 03:11:53.311248  651375 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:11:53.311265  651375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:11:53.311322  651375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:11:53.311414  651375 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:11:53.311423  651375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:11:53.311449  651375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:11:53.311514  651375 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:11:53.311528  651375 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:11:53.311551  651375 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:11:53.311617  651375 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.newest-cni-438041 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-438041]
	I1124 03:11:53.385091  651375 provision.go:177] copyRemoteCerts
	I1124 03:11:53.385135  651375 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:11:53.385171  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:53.403988  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:53.504556  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:11:53.521266  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:11:53.537701  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:11:53.554294  651375 provision.go:87] duration metric: took 260.985811ms to configureAuth
	I1124 03:11:53.554314  651375 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:11:53.554467  651375 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:53.554556  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:53.573458  651375 main.go:143] libmachine: Using SSH client type: native
	I1124 03:11:53.573772  651375 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33483 <nil> <nil>}
	I1124 03:11:53.573798  651375 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:11:53.876636  651375 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:11:53.876661  651375 machine.go:97] duration metric: took 4.066025512s to provisionDockerMachine
	I1124 03:11:53.876676  651375 start.go:293] postStartSetup for "newest-cni-438041" (driver="docker")
	I1124 03:11:53.876690  651375 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:11:53.876748  651375 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:11:53.876833  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:53.901478  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:54.002635  651375 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:11:54.006142  651375 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:11:54.006170  651375 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:11:54.006184  651375 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:11:54.006247  651375 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:11:54.006316  651375 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:11:54.006400  651375 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:11:54.014092  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:54.031386  651375 start.go:296] duration metric: took 154.69471ms for postStartSetup
	I1124 03:11:54.031487  651375 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:11:54.031538  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:54.055932  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:54.155460  651375 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:11:54.160082  651375 fix.go:56] duration metric: took 4.676000022s for fixHost
	I1124 03:11:54.160106  651375 start.go:83] releasing machines lock for "newest-cni-438041", held for 4.676040415s
	I1124 03:11:54.160165  651375 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-438041
	I1124 03:11:54.177706  651375 ssh_runner.go:195] Run: cat /version.json
	I1124 03:11:54.177749  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:54.177800  651375 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:11:54.177875  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:54.201769  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:54.202803  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:53.370290  650744 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:11:53.374610  650744 fix.go:56] duration metric: took 4.849001876s for fixHost
	I1124 03:11:53.374635  650744 start.go:83] releasing machines lock for "old-k8s-version-579951", held for 4.849045163s
	I1124 03:11:53.374701  650744 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-579951
	I1124 03:11:53.391284  650744 ssh_runner.go:195] Run: cat /version.json
	I1124 03:11:53.391357  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:53.391387  650744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:11:53.391455  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:53.410218  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:53.410760  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:53.559012  650744 ssh_runner.go:195] Run: systemctl --version
	I1124 03:11:53.565240  650744 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:11:53.599725  650744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:11:53.604385  650744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:11:53.604445  650744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:11:53.612174  650744 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
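
ssh_runner logs that find invocation with its shell quoting stripped. Re-quoted for copy-paste it reads roughly as follows (the exact original quoting is an assumption; note that GNU find substitutes {} even inside the sh -c argument):

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
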
	I1124 03:11:53.612203  650744 start.go:496] detecting cgroup driver to use...
	I1124 03:11:53.612238  650744 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:11:53.612278  650744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:11:53.626624  650744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:11:53.638422  650744 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:11:53.638469  650744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:11:53.651771  650744 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:11:53.663312  650744 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:11:53.742876  650744 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:11:53.829646  650744 docker.go:234] disabling docker service ...
	I1124 03:11:53.829715  650744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:11:53.843791  650744 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:11:53.855910  650744 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:11:53.942002  650744 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:11:54.032167  650744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:11:54.050243  650744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:11:54.067047  650744 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1124 03:11:54.067113  650744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.077670  650744 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:11:54.077746  650744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.087852  650744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.097146  650744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.106088  650744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:11:54.115186  650744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.123913  650744 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.132185  650744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.140989  650744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:11:54.148242  650744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:11:54.155386  650744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:54.241274  650744 ssh_runner.go:195] Run: sudo systemctl restart crio
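
Pieced together from the sed edits above, the relevant lines of /etc/crio/crio.conf.d/02-crio.conf should now read roughly as follows (the surrounding layout of the drop-in is an assumption; the same sequence repeats for newest-cni-438041 below with pause:3.10.1 for Kubernetes v1.34.1):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
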
	I1124 03:11:54.374173  650744 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:11:54.374240  650744 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:11:54.378159  650744 start.go:564] Will wait 60s for crictl version
	I1124 03:11:54.378214  650744 ssh_runner.go:195] Run: which crictl
	I1124 03:11:54.382186  650744 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:11:54.407341  650744 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:11:54.407411  650744 ssh_runner.go:195] Run: crio --version
	I1124 03:11:54.435584  650744 ssh_runner.go:195] Run: crio --version
	I1124 03:11:54.466574  650744 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1124 03:11:54.354645  651375 ssh_runner.go:195] Run: systemctl --version
	I1124 03:11:54.361151  651375 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:11:54.396538  651375 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:11:54.401291  651375 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:11:54.401363  651375 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:11:54.410052  651375 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:11:54.410074  651375 start.go:496] detecting cgroup driver to use...
	I1124 03:11:54.410102  651375 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:11:54.410175  651375 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:11:54.424400  651375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:11:54.436581  651375 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:11:54.436641  651375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:11:54.451281  651375 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:11:54.464491  651375 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:11:54.544542  651375 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:11:54.636859  651375 docker.go:234] disabling docker service ...
	I1124 03:11:54.636935  651375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:11:54.650564  651375 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:11:54.662584  651375 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:11:54.750225  651375 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:11:54.837494  651375 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:11:54.851435  651375 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:11:54.868586  651375 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:11:54.868644  651375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.879945  651375 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:11:54.880011  651375 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.891096  651375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.899537  651375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.907696  651375 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:11:54.915645  651375 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.924063  651375 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.932333  651375 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:11:54.940494  651375 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:11:54.947799  651375 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:11:54.955005  651375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:55.036032  651375 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:11:55.166161  651375 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:11:55.166223  651375 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:11:55.169942  651375 start.go:564] Will wait 60s for crictl version
	I1124 03:11:55.169994  651375 ssh_runner.go:195] Run: which crictl
	I1124 03:11:55.174111  651375 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:11:55.201399  651375 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:11:55.201458  651375 ssh_runner.go:195] Run: crio --version
	I1124 03:11:55.229114  651375 ssh_runner.go:195] Run: crio --version
	I1124 03:11:55.263353  651375 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:11:55.264458  651375 cli_runner.go:164] Run: docker network inspect newest-cni-438041 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:11:55.283878  651375 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:11:55.287727  651375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
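
That one-liner rewrites /etc/hosts in place: it filters out any stale host.minikube.internal entry, appends the current gateway address, and copies the temp file back. A hypothetical spot check:

	grep host.minikube.internal /etc/hosts    # expect: 192.168.94.1	host.minikube.internal
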
	I1124 03:11:55.300854  651375 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 03:11:54.467605  650744 cli_runner.go:164] Run: docker network inspect old-k8s-version-579951 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:11:54.484445  650744 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 03:11:54.488309  650744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:54.501278  650744 kubeadm.go:884] updating cluster {Name:old-k8s-version-579951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-579951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:11:54.501423  650744 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 03:11:54.501496  650744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:54.531372  650744 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:54.531395  650744 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:11:54.531441  650744 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:54.555715  650744 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:54.555734  650744 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:11:54.555741  650744 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.28.0 crio true true} ...
	I1124 03:11:54.555839  650744 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-579951 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-579951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:11:54.555930  650744 ssh_runner.go:195] Run: crio config
	I1124 03:11:54.617621  650744 cni.go:84] Creating CNI manager for ""
	I1124 03:11:54.617645  650744 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:54.617665  650744 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:11:54.617695  650744 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-579951 NodeName:old-k8s-version-579951 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:54.617850  650744 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-579951"
	  kubeletExtraArgs:
	    node-ip: 192.168.103.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
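Note: the block above is the complete file minikube stages at /var/tmp/minikube/kubeadm.yaml.new — four YAML documents (InitConfiguration and ClusterConfiguration from kubeadm.k8s.io/v1beta3, a KubeletConfiguration, and a KubeProxyConfiguration) separated by --- lines. A stdlib-only sketch of splitting such a stream into its documents; kubeadm itself parses these with the Kubernetes YAML machinery, and the function name here is illustrative:

package kubeadmcfg

import (
	"bufio"
	"strings"
)

// SplitDocs splits a multi-document YAML stream on "---" separator lines,
// the layout used by the combined kubeadm.yaml above.
func SplitDocs(stream string) []string {
	var docs []string
	var cur strings.Builder
	sc := bufio.NewScanner(strings.NewReader(stream))
	for sc.Scan() {
		if strings.TrimSpace(sc.Text()) == "---" {
			docs = append(docs, cur.String())
			cur.Reset()
			continue
		}
		cur.WriteString(sc.Text())
		cur.WriteByte('\n')
	}
	if cur.Len() > 0 {
		docs = append(docs, cur.String())
	}
	return docs
}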
	I1124 03:11:54.617945  650744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 03:11:54.625707  650744 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:11:54.625771  650744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:54.633259  650744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I1124 03:11:54.645713  650744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:54.657930  650744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I1124 03:11:54.670120  650744 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:54.673421  650744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:54.682551  650744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:54.764746  650744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:54.787574  650744 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951 for IP: 192.168.103.2
	I1124 03:11:54.787596  650744 certs.go:195] generating shared ca certs ...
	I1124 03:11:54.787617  650744 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:54.787796  650744 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:54.787857  650744 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:54.787999  650744 certs.go:257] generating profile certs ...
	I1124 03:11:54.788183  650744 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/client.key
	I1124 03:11:54.788276  650744 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/apiserver.key.e6745a5b
	I1124 03:11:54.788326  650744 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/proxy-client.key
	I1124 03:11:54.788469  650744 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:54.788513  650744 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:54.788527  650744 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:54.788569  650744 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:54.788606  650744 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:54.788636  650744 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:54.788693  650744 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:54.789558  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:54.808835  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:54.827231  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:54.845706  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:54.868181  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 03:11:54.890468  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:11:54.907700  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:54.924797  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/old-k8s-version-579951/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:11:54.941639  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:54.958656  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:54.975700  650744 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:54.997087  650744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
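Note: the "scp memory --> path" lines stream an in-memory buffer to the node instead of copying a local file. A sketch of that idea over golang.org/x/crypto/ssh, using sudo tee as an illustrative remote write; minikube's actual transport may differ:

package sshpush

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// pushBytes writes an in-memory buffer to remotePath over an SSH session.
func pushBytes(client *ssh.Client, data []byte, remotePath string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	// Feed the buffer as the remote command's stdin and let tee write it.
	sess.Stdin = bytes.NewReader(data)
	return sess.Run(fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
}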
	I1124 03:11:55.008914  650744 ssh_runner.go:195] Run: openssl version
	I1124 03:11:55.014559  650744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:55.022283  650744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:55.025806  650744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:55.025869  650744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:55.063352  650744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:11:55.071428  650744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:55.080766  650744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:55.084543  650744 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:55.084588  650744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:55.120661  650744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:55.128811  650744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:55.137242  650744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:55.140743  650744 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:55.140779  650744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:55.177906  650744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
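Note: openssl x509 -hash -noout prints the certificate's subject-name hash, and OpenSSL resolves trusted CAs in /etc/ssl/certs by <hash>.0 symlinks — which is why the three certs above end up linked as b5213941.0, 51391683.0, and 3ec20f2e.0. A sketch of the same hash-and-symlink step; the function name is illustrative and error handling is trimmed:

package catrust

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA links certPath into certDir under OpenSSL's <subject-hash>.0
// naming scheme, mirroring the ln -fs commands above.
func installCA(certPath, certDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
	os.Remove(link) // emulate ln -f: replace any existing link
	return os.Symlink(certPath, link)
}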
	I1124 03:11:55.186832  650744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:55.190616  650744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:11:55.227571  650744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:11:55.275193  650744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:11:55.323462  650744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:11:55.367108  650744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:11:55.425531  650744 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
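Note: each -checkend 86400 run above exits non-zero if the certificate expires within the next 24 hours (86,400 seconds); a non-zero exit is what would trigger regeneration. The crypto/x509 equivalent of that check, with an illustrative function name:

package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"time"
)

// expiresWithin reports whether the first PEM certificate in pemBytes
// expires within d — the same test as `openssl x509 -checkend`.
func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}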
	I1124 03:11:55.488993  650744 kubeadm.go:401] StartCluster: {Name:old-k8s-version-579951 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-579951 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:55.489121  650744 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:55.489205  650744 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:55.528987  650744 cri.go:89] found id: "cc8b5ee4851c9ae1241dd77995f3d1a2e725abb08136f47c106f5adf7f25f2a7"
	I1124 03:11:55.529073  650744 cri.go:89] found id: "3176f2d8220eaa411e72fa77d582041c78e4d0b8acbd739cd01992ec3cfa7230"
	I1124 03:11:55.529090  650744 cri.go:89] found id: "30d22d684ad7501e38080ff45bbe87f71a21252754ba692fc20125e3845f807a"
	I1124 03:11:55.529115  650744 cri.go:89] found id: "3356da3bf9c8232ed305911fa37644fd0513640f4477238b1a7e39b8e438c2a0"
	I1124 03:11:55.529119  650744 cri.go:89] found id: ""
	I1124 03:11:55.529165  650744 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:11:55.543702  650744 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:11:55Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:11:55.543919  650744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:55.555836  650744 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:11:55.556700  650744 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:11:55.556774  650744 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:11:55.566242  650744 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:11:55.567163  650744 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-579951" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:55.567663  650744 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-579951" cluster setting kubeconfig missing "old-k8s-version-579951" context setting]
	I1124 03:11:55.568451  650744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:55.570408  650744 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:11:55.584022  650744 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1124 03:11:55.584094  650744 kubeadm.go:602] duration metric: took 27.380465ms to restartPrimaryControlPlane
	I1124 03:11:55.584116  650744 kubeadm.go:403] duration metric: took 95.136682ms to StartCluster
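Note: the diff -u between the staged kubeadm.yaml and kubeadm.yaml.new two lines up is the reconfiguration test — exit status 0 means the rendered config is unchanged, so the running control plane is reused as-is. A sketch of branching on that exit status; the function name is illustrative:

package reconfig

import "os/exec"

// needsReconfig runs `diff -u old new`: diff exits 0 when the files match,
// non-zero when they differ (or on other errors, surfaced separately).
func needsReconfig(oldPath, newPath string) (bool, error) {
	err := exec.Command("diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // identical: no reconfiguration required
	}
	if _, ok := err.(*exec.ExitError); ok {
		return true, nil // files differ
	}
	return false, err // diff itself failed to run
}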
	I1124 03:11:55.584143  650744 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:55.584226  650744 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:55.585768  650744 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:55.585994  650744 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:55.586185  650744 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:55.586292  650744 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-579951"
	I1124 03:11:55.586318  650744 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-579951"
	W1124 03:11:55.586329  650744 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:11:55.586360  650744 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:55.586872  650744 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:55.587179  650744 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:11:55.587199  650744 addons.go:70] Setting dashboard=true in profile "old-k8s-version-579951"
	I1124 03:11:55.587372  650744 addons.go:239] Setting addon dashboard=true in "old-k8s-version-579951"
	W1124 03:11:55.587382  650744 addons.go:248] addon dashboard should already be in state true
	I1124 03:11:55.587406  650744 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:55.587869  650744 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:55.587220  650744 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-579951"
	I1124 03:11:55.588633  650744 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-579951"
	I1124 03:11:55.588966  650744 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:55.590288  650744 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:55.595231  650744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:55.621484  650744 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-579951"
	W1124 03:11:55.621610  650744 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:11:55.621688  650744 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:11:55.622417  650744 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:11:55.626837  650744 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:11:55.626840  650744 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:55.628128  650744 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:55.628176  650744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:55.628248  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:55.628181  650744 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:11:55.302209  651375 kubeadm.go:884] updating cluster {Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:11:55.302390  651375 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:11:55.302458  651375 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:55.338853  651375 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:55.338878  651375 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:11:55.338954  651375 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:11:55.366076  651375 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:11:55.366102  651375 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:11:55.366109  651375 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:11:55.366220  651375 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-438041 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:11:55.366303  651375 ssh_runner.go:195] Run: crio config
	I1124 03:11:55.440125  651375 cni.go:84] Creating CNI manager for ""
	I1124 03:11:55.440166  651375 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:11:55.440190  651375 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 03:11:55.440304  651375 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-438041 NodeName:newest-cni-438041 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:11:55.440585  651375 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-438041"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
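Note: comparing this config with the v1.28.0 one earlier shows the kubeadm API change between the two clusters — v1beta3 renders kubeletExtraArgs and each component's extraArgs as plain key/value maps, while v1beta4 (used here for Kubernetes v1.34.1) renders them as ordered lists of name/value pairs, so flags keep their order and may repeat. A sketch of the two shapes; the type names are illustrative stand-ins, not the kubeadm API's:

package kubeadmargs

// v1beta3 style: unordered, one value per flag.
type ArgsV1beta3 map[string]string

// v1beta4 style: an ordered list where the same flag may appear more
// than once, matching the `- name: ... / value: ...` YAML above.
type Arg struct {
	Name  string
	Value string
}

type ArgsV1beta4 []Arg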
	I1124 03:11:55.440735  651375 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:11:55.453216  651375 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:11:55.453431  651375 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:11:55.467527  651375 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:11:55.487696  651375 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:11:55.504184  651375 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1124 03:11:55.518982  651375 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:11:55.524390  651375 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:11:55.537713  651375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:55.688666  651375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:55.719136  651375 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041 for IP: 192.168.94.2
	I1124 03:11:55.719216  651375 certs.go:195] generating shared ca certs ...
	I1124 03:11:55.719253  651375 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:55.719452  651375 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:11:55.719506  651375 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:11:55.719515  651375 certs.go:257] generating profile certs ...
	I1124 03:11:55.719641  651375 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/client.key
	I1124 03:11:55.719706  651375 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key.52539183
	I1124 03:11:55.719758  651375 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key
	I1124 03:11:55.719908  651375 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:11:55.719950  651375 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:11:55.719960  651375 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:11:55.719994  651375 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:11:55.720030  651375 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:11:55.720059  651375 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:11:55.720122  651375 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:11:55.728513  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:11:55.753214  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:11:55.773936  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:11:55.797324  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:11:55.826138  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:11:55.864774  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:11:55.890504  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:11:55.915534  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/newest-cni-438041/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:11:55.936867  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:11:55.957784  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:11:55.977419  651375 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:11:55.996059  651375 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:11:56.009161  651375 ssh_runner.go:195] Run: openssl version
	I1124 03:11:56.015044  651375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:11:56.023454  651375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:11:56.027144  651375 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:11:56.027196  651375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:11:56.064292  651375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:11:56.072189  651375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:11:56.080235  651375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:56.083782  651375 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:56.083827  651375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:11:56.116705  651375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:11:56.124082  651375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:11:56.132005  651375 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:11:56.135614  651375 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:11:56.135659  651375 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:11:56.169786  651375 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:11:56.177431  651375 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:11:56.181071  651375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:11:56.214240  651375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:11:56.247819  651375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:11:56.282593  651375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:11:56.330465  651375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:11:56.375382  651375 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 03:11:56.429850  651375 kubeadm.go:401] StartCluster: {Name:newest-cni-438041 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-438041 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:11:56.429998  651375 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:11:56.430062  651375 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:11:56.463304  651375 cri.go:89] found id: "0903674c0ff17f5f88d257aea9b1e2cf56ff9103105cdbeb4e86732b145c0bef"
	I1124 03:11:56.463327  651375 cri.go:89] found id: "5dcec9dda2453f45f4516eff019d2077d2052e95c11d896705f53b3ac53c11a9"
	I1124 03:11:56.463333  651375 cri.go:89] found id: "453c0dc25dde51ccdc58f6043d75d117dc72d3b347ea5068c17db0082002c0ad"
	I1124 03:11:56.463337  651375 cri.go:89] found id: "a629768f55496c2969d757c473189f52d99ddea90e0a365150097df5fe2ec9e2"
	I1124 03:11:56.463341  651375 cri.go:89] found id: ""
	I1124 03:11:56.463386  651375 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:11:56.475440  651375 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:11:56Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:11:56.475515  651375 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:11:56.484103  651375 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:11:56.484119  651375 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:11:56.484167  651375 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:11:56.491423  651375 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:11:56.492180  651375 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-438041" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:56.492651  651375 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-438041" cluster setting kubeconfig missing "newest-cni-438041" context setting]
	I1124 03:11:56.493413  651375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:56.494954  651375 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:11:56.502146  651375 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 03:11:56.502174  651375 kubeadm.go:602] duration metric: took 18.049259ms to restartPrimaryControlPlane
	I1124 03:11:56.502183  651375 kubeadm.go:403] duration metric: took 72.346726ms to StartCluster
	I1124 03:11:56.502198  651375 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:56.502266  651375 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:11:56.503449  651375 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:11:56.503644  651375 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:11:56.503802  651375 config.go:182] Loaded profile config "newest-cni-438041": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:11:56.503842  651375 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:11:56.503924  651375 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-438041"
	I1124 03:11:56.503941  651375 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-438041"
	W1124 03:11:56.503952  651375 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:11:56.503979  651375 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:56.504019  651375 addons.go:70] Setting dashboard=true in profile "newest-cni-438041"
	I1124 03:11:56.504063  651375 addons.go:239] Setting addon dashboard=true in "newest-cni-438041"
	W1124 03:11:56.504077  651375 addons.go:248] addon dashboard should already be in state true
	I1124 03:11:56.504113  651375 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:56.504102  651375 addons.go:70] Setting default-storageclass=true in profile "newest-cni-438041"
	I1124 03:11:56.504146  651375 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-438041"
	I1124 03:11:56.504460  651375 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:56.504487  651375 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:56.504613  651375 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:56.505883  651375 out.go:179] * Verifying Kubernetes components...
	I1124 03:11:56.507118  651375 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:11:56.528676  651375 addons.go:239] Setting addon default-storageclass=true in "newest-cni-438041"
	W1124 03:11:56.528700  651375 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:11:56.528729  651375 host.go:66] Checking if "newest-cni-438041" exists ...
	I1124 03:11:56.529015  651375 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:11:56.529015  651375 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:11:56.529231  651375 cli_runner.go:164] Run: docker container inspect newest-cni-438041 --format={{.State.Status}}
	I1124 03:11:56.530408  651375 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:56.530428  651375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:11:56.530484  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:56.531428  651375 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:11:55.629296  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:11:55.629364  650744 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:11:55.629448  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:55.653096  650744 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:55.653124  650744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:55.653185  650744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:11:55.659046  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:55.673014  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:55.694050  650744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:11:55.777052  650744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:55.794835  650744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:55.797462  650744 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-579951" to be "Ready" ...
	I1124 03:11:55.801170  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:11:55.801186  650744 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:11:55.818027  650744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:55.823370  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:11:55.823389  650744 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:11:55.857809  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:11:55.857837  650744 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:11:55.882470  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:11:55.882491  650744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:11:55.902562  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:11:55.902586  650744 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:11:55.921007  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:11:55.921028  650744 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:11:55.937245  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:11:55.937265  650744 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:11:55.951635  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:11:55.951656  650744 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:11:55.965030  650744 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:11:55.965053  650744 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:11:55.980361  650744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:11:57.922223  650744 node_ready.go:49] node "old-k8s-version-579951" is "Ready"
	I1124 03:11:57.922259  650744 node_ready.go:38] duration metric: took 2.124761171s for node "old-k8s-version-579951" to be "Ready" ...
	I1124 03:11:57.922276  650744 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:57.922328  650744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:11:56.532483  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:11:56.532503  651375 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:11:56.532557  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:56.558942  651375 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:56.559099  651375 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:11:56.559488  651375 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-438041
	I1124 03:11:56.563483  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:56.565335  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:56.581714  651375 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/newest-cni-438041/id_rsa Username:docker}
	I1124 03:11:56.649131  651375 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:11:56.661798  651375 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:11:56.661950  651375 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:11:56.675962  651375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:11:56.676028  651375 api_server.go:72] duration metric: took 172.356334ms to wait for apiserver process to appear ...
	I1124 03:11:56.676041  651375 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:56.676058  651375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:56.678809  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:11:56.678827  651375 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:11:56.694241  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:11:56.694259  651375 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:11:56.695582  651375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:11:56.709532  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:11:56.709553  651375 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:11:56.725361  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:11:56.725379  651375 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:11:56.742031  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:11:56.742053  651375 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:11:56.757398  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:11:56.757419  651375 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:11:56.772359  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:11:56.772376  651375 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:11:56.784906  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:11:56.784923  651375 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:11:56.800657  651375 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:11:56.800676  651375 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:11:56.816333  651375 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
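
Both profiles install the dashboard the same way: each manifest is scp'd into /etc/kubernetes/addons on the node, then applied in a single kubectl invocation using the node-local kubeconfig and the version-matched kubectl binary. A minimal sketch of the same step done by hand inside the node, using one of the manifests from the log above:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.34.1/kubectl apply \
	  -f /etc/kubernetes/addons/dashboard-ns.yaml
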
	I1124 03:11:58.429662  651375 api_server.go:279] https://192.168.94.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1124 03:11:58.429696  651375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1124 03:11:58.429715  651375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:58.506656  651375 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:11:58.506689  651375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/start-kubernetes-service-cidr-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:11:58.676122  651375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:58.682095  651375 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:11:58.682121  651375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:11:59.063524  651375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.387529341s)
	I1124 03:11:59.063633  651375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.368021884s)
	I1124 03:11:59.063735  651375 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.247358354s)
	I1124 03:11:59.065554  651375 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-438041 addons enable metrics-server
	
	I1124 03:11:59.077814  651375 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 03:11:58.767994  650744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.97312073s)
	I1124 03:11:58.767996  650744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.949932356s)
	I1124 03:11:59.119301  650744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.138860206s)
	I1124 03:11:59.119326  650744 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.196967726s)
	I1124 03:11:59.119356  650744 api_server.go:72] duration metric: took 3.533325717s to wait for apiserver process to appear ...
	I1124 03:11:59.119366  650744 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:11:59.119389  650744 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 03:11:59.123936  650744 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-579951 addons enable metrics-server
	
	I1124 03:11:59.124802  650744 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 03:11:59.125919  650744 api_server.go:141] control plane version: v1.28.0
	I1124 03:11:59.125946  650744 api_server.go:131] duration metric: took 6.571904ms to wait for apiserver health ...
	I1124 03:11:59.125957  650744 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:59.128338  650744 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 03:11:59.078988  651375 addons.go:530] duration metric: took 2.575137687s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 03:11:59.177014  651375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:59.182598  651375 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:11:59.182626  651375 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:11:59.676406  651375 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:11:59.680259  651375 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:11:59.681145  651375 api_server.go:141] control plane version: v1.34.1
	I1124 03:11:59.681173  651375 api_server.go:131] duration metric: took 3.005125501s to wait for apiserver health ...
	I1124 03:11:59.681182  651375 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:11:59.684160  651375 system_pods.go:59] 9 kube-system pods found
	I1124 03:11:59.684187  651375 system_pods.go:61] "coredns-66bc5c9577-b5rlp" [ec3ad010-7694-4640-9638-fe6f5c97f56a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:59.684195  651375 system_pods.go:61] "coredns-66bc5c9577-mwvq8" [c8831e7f-34c0-40c7-a728-7f7882ed604a] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:59.684204  651375 system_pods.go:61] "etcd-newest-cni-438041" [7acbb753-dfd2-4438-b370-a7e38c4fbc5d] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:11:59.684210  651375 system_pods.go:61] "kindnet-xp46p" [19fa7668-24bd-454c-a5df-37534a06d3a5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:11:59.684216  651375 system_pods.go:61] "kube-apiserver-newest-cni-438041" [c7d90375-f6c0-4a1f-8b80-81574119b191] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:11:59.684222  651375 system_pods.go:61] "kube-controller-manager-newest-cni-438041" [54b144f6-6f26-4e9b-818b-cbb2d7b4c0a4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:11:59.684232  651375 system_pods.go:61] "kube-proxy-n85pg" [86f875e2-7efc-4b60-b031-a1de71ea7502] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:11:59.684239  651375 system_pods.go:61] "kube-scheduler-newest-cni-438041" [75e99a3a-d4a9-4428-a52a-ef5ac4edc76c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:11:59.684253  651375 system_pods.go:61] "storage-provisioner" [9a94c2f7-e288-4528-b22c-f413d79bdf46] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 03:11:59.684259  651375 system_pods.go:74] duration metric: took 3.07116ms to wait for pod list to return data ...
	I1124 03:11:59.684269  651375 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:59.686477  651375 default_sa.go:45] found service account: "default"
	I1124 03:11:59.686502  651375 default_sa.go:55] duration metric: took 2.226294ms for default service account to be created ...
	I1124 03:11:59.686514  651375 kubeadm.go:587] duration metric: took 3.182843557s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 03:11:59.686535  651375 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:59.689409  651375 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:59.689435  651375 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:59.689452  651375 node_conditions.go:105] duration metric: took 2.911385ms to run NodePressure ...
	I1124 03:11:59.689464  651375 start.go:242] waiting for startup goroutines ...
	I1124 03:11:59.689471  651375 start.go:247] waiting for cluster config update ...
	I1124 03:11:59.689483  651375 start.go:256] writing updated cluster config ...
	I1124 03:11:59.689750  651375 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:59.736246  651375 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:11:59.738970  651375 out.go:179] * Done! kubectl is now configured to use "newest-cni-438041" cluster and "default" namespace by default
	I1124 03:11:59.129607  650744 addons.go:530] duration metric: took 3.54343142s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 03:11:59.130131  650744 system_pods.go:59] 8 kube-system pods found
	I1124 03:11:59.130177  650744 system_pods.go:61] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:59.130196  650744 system_pods.go:61] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:11:59.130216  650744 system_pods.go:61] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:11:59.130237  650744 system_pods.go:61] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:11:59.130251  650744 system_pods.go:61] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:11:59.130264  650744 system_pods.go:61] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:11:59.130277  650744 system_pods.go:61] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:11:59.130288  650744 system_pods.go:61] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:59.130298  650744 system_pods.go:74] duration metric: took 4.334684ms to wait for pod list to return data ...
	I1124 03:11:59.130311  650744 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:11:59.132423  650744 default_sa.go:45] found service account: "default"
	I1124 03:11:59.132442  650744 default_sa.go:55] duration metric: took 2.121754ms for default service account to be created ...
	I1124 03:11:59.132454  650744 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:11:59.135760  650744 system_pods.go:86] 8 kube-system pods found
	I1124 03:11:59.135789  650744 system_pods.go:89] "coredns-5dd5756b68-5nwx9" [1278c848-f63d-4e7c-879a-523510d29787] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:11:59.135800  650744 system_pods.go:89] "etcd-old-k8s-version-579951" [cd0eda89-c180-48a3-995a-e71b4ee27438] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:11:59.135815  650744 system_pods.go:89] "kindnet-gdpzl" [c6b50cfd-0b4f-4b88-8165-3c38f57d9e9d] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:11:59.135827  650744 system_pods.go:89] "kube-apiserver-old-k8s-version-579951" [f9981490-9a8b-4ce6-b88b-6ca5daf2d795] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:11:59.135836  650744 system_pods.go:89] "kube-controller-manager-old-k8s-version-579951" [7df7ab38-1467-4609-861e-b13cdf27a24d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:11:59.135854  650744 system_pods.go:89] "kube-proxy-r82jh" [07210933-4da6-4a8e-b29f-15bc6a74911b] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:11:59.135866  650744 system_pods.go:89] "kube-scheduler-old-k8s-version-579951" [5b71256a-e41e-4827-a4ce-afd11d220bb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:11:59.135874  650744 system_pods.go:89] "storage-provisioner" [b994a9c9-e16e-40e8-b8eb-682c5dfa7372] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:11:59.135901  650744 system_pods.go:126] duration metric: took 3.422391ms to wait for k8s-apps to be running ...
	I1124 03:11:59.135914  650744 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:11:59.135969  650744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:11:59.149453  650744 system_svc.go:56] duration metric: took 13.533924ms WaitForService to wait for kubelet
	I1124 03:11:59.149480  650744 kubeadm.go:587] duration metric: took 3.563448701s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:11:59.149500  650744 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:11:59.151973  650744 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:11:59.152000  650744 node_conditions.go:123] node cpu capacity is 8
	I1124 03:11:59.152016  650744 node_conditions.go:105] duration metric: took 2.510649ms to run NodePressure ...
	I1124 03:11:59.152029  650744 start.go:242] waiting for startup goroutines ...
	I1124 03:11:59.152038  650744 start.go:247] waiting for cluster config update ...
	I1124 03:11:59.152055  650744 start.go:256] writing updated cluster config ...
	I1124 03:11:59.152320  650744 ssh_runner.go:195] Run: rm -f paused
	I1124 03:11:59.156477  650744 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:11:59.161081  650744 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:12:01.167021  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:03.167269  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
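
A note on the healthz polling earlier in this log: minikube probes /healthz anonymously, so the initial 403 ("system:anonymous" cannot get path "/healthz") and the 500 responses with "reason withheld" per failing post-start hook are expected while RBAC bootstrap and the post-start hooks are still completing; once rbac/bootstrap-roles finishes, the same anonymous probe returns 200. The probe can be reproduced by hand; a sketch, assuming the apiserver is still reachable on 192.168.94.2:8443 (the ?verbose query returns the same per-check breakdown seen in the 500 bodies):

	# anonymous health probe, as the test harness performs it; -k skips TLS
	# verification because the request carries no client certificate
	curl -k "https://192.168.94.2:8443/healthz?verbose"
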
	
	
	==> CRI-O <==
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.125766044Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.126027673Z" level=info msg="Running pod sandbox: kube-system/kube-proxy-n85pg/POD" id=5ae6d350-208b-4290-92ea-99c5d6a00ea5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.126090392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.12883121Z" level=info msg="Ran pod sandbox 5fbc46437e798eb293c24a234c08d38684e883bc3fb5c526c7a9047be83255d0 with infra container: kube-system/kindnet-xp46p/POD" id=f49ed1ed-486a-412f-a01c-d708ea21a16a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.130018276Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=cd7cccbb-2936-4f53-842a-4be2e060df8a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.131378842Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=5ae6d350-208b-4290-92ea-99c5d6a00ea5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.131475992Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=65c3594e-0bec-4411-a85a-3bb7fb0c943e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.132660129Z" level=info msg="Creating container: kube-system/kindnet-xp46p/kindnet-cni" id=c0fddb86-8d94-4a33-9f0c-cca989f4359e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.132747542Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.133142693Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.133995431Z" level=info msg="Ran pod sandbox 7e177ac3a85fe32b36785510cafd415d883150ae1a844a6260ec2c1a42df24c2 with infra container: kube-system/kube-proxy-n85pg/POD" id=5ae6d350-208b-4290-92ea-99c5d6a00ea5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.134968586Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e01acad7-b751-4f93-8284-47a83fdf7151 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.136106499Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=40675711-f351-4aff-9773-e7149c24a6a4 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.136813745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.137425414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.137503667Z" level=info msg="Creating container: kube-system/kube-proxy-n85pg/kube-proxy" id=a248f46b-2cb7-476b-9ca6-d34f51c589c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.137621439Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.141526503Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.141932561Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.163702042Z" level=info msg="Created container ca8f3e49d19f2e01029131d694cb2f3366da510e6fc4954a37c6be9f877ca0a4: kube-system/kindnet-xp46p/kindnet-cni" id=c0fddb86-8d94-4a33-9f0c-cca989f4359e name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.164242999Z" level=info msg="Starting container: ca8f3e49d19f2e01029131d694cb2f3366da510e6fc4954a37c6be9f877ca0a4" id=972abd6d-a0a0-45c4-837e-d7dfff2d0e10 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.165996819Z" level=info msg="Started container" PID=1043 containerID=ca8f3e49d19f2e01029131d694cb2f3366da510e6fc4954a37c6be9f877ca0a4 description=kube-system/kindnet-xp46p/kindnet-cni id=972abd6d-a0a0-45c4-837e-d7dfff2d0e10 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5fbc46437e798eb293c24a234c08d38684e883bc3fb5c526c7a9047be83255d0
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.168524811Z" level=info msg="Created container e62bf0a89aa63020c936ad6a03e6c0480301613e4cd0a960cd58ad2ac3fe719f: kube-system/kube-proxy-n85pg/kube-proxy" id=a248f46b-2cb7-476b-9ca6-d34f51c589c1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.169031946Z" level=info msg="Starting container: e62bf0a89aa63020c936ad6a03e6c0480301613e4cd0a960cd58ad2ac3fe719f" id=c26ba391-fe77-4970-987b-86f108227528 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:11:59 newest-cni-438041 crio[525]: time="2025-11-24T03:11:59.172015469Z" level=info msg="Started container" PID=1044 containerID=e62bf0a89aa63020c936ad6a03e6c0480301613e4cd0a960cd58ad2ac3fe719f description=kube-system/kube-proxy-n85pg/kube-proxy id=c26ba391-fe77-4970-987b-86f108227528 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7e177ac3a85fe32b36785510cafd415d883150ae1a844a6260ec2c1a42df24c2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	e62bf0a89aa63       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   7e177ac3a85fe       kube-proxy-n85pg                            kube-system
	ca8f3e49d19f2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   5fbc46437e798       kindnet-xp46p                               kube-system
	0903674c0ff17       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   2361cf90a13f8       kube-apiserver-newest-cni-438041            kube-system
	5dcec9dda2453       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   31fc8d0f26d1e       kube-controller-manager-newest-cni-438041   kube-system
	453c0dc25dde5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   8a21d9f9217f3       etcd-newest-cni-438041                      kube-system
	a629768f55496       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   86d1f5bca10ab       kube-scheduler-newest-cni-438041            kube-system
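
The table above matches the output format of crictl; a sketch for reproducing it inside the node, assuming crictl is pointed at the CRI-O socket (the default on this image):

	# list all CRI-managed containers, including exited ones
	sudo crictl ps -a
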
	
	
	==> describe nodes <==
	Name:               newest-cni-438041
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-438041
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=newest-cni-438041
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_11_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:11:20 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-438041
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:11:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:11:58 +0000   Mon, 24 Nov 2025 03:11:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:11:58 +0000   Mon, 24 Nov 2025 03:11:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:11:58 +0000   Mon, 24 Nov 2025 03:11:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 03:11:58 +0000   Mon, 24 Nov 2025 03:11:18 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-438041
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                6b4f4c50-807c-4c82-a9aa-10eb04614b7a
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-438041                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         43s
	  kube-system                 kindnet-xp46p                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      38s
	  kube-system                 kube-apiserver-newest-cni-438041             250m (3%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-newest-cni-438041    200m (2%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-n85pg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-scheduler-newest-cni-438041             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 36s   kube-proxy       
	  Normal  Starting                 6s    kube-proxy       
	  Normal  Starting                 43s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s   kubelet          Node newest-cni-438041 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s   kubelet          Node newest-cni-438041 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s   kubelet          Node newest-cni-438041 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s   node-controller  Node newest-cni-438041 event: Registered Node newest-cni-438041 in Controller
	  Normal  RegisteredNode           4s    node-controller  Node newest-cni-438041 event: Registered Node newest-cni-438041 in Controller
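
The Ready=False condition ("no CNI configuration file in /etc/cni/net.d/") matches the node.kubernetes.io/not-ready:NoSchedule taint shown under Taints, and accounts for the Unschedulable coredns and storage-provisioner pods earlier in the log: kindnet had only just restarted. To watch the condition clear once the CNI config is written, a sketch, assuming the kubeconfig context follows the profile name (as the "Done!" line above indicates):

	# print only the Ready condition's status for the node
	kubectl --context newest-cni-438041 get node newest-cni-438041 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
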
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
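
The repeated "martian source" lines are the kernel logging packets whose source address arrived on an interface with no return route, a side effect of the log_martians sysctl being enabled; on Docker-based CI hosts this bridge noise is generally benign. To confirm the sysctl responsible (a sketch):

	sysctl net.ipv4.conf.all.log_martians
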
	
	
	==> etcd [453c0dc25dde51ccdc58f6043d75d117dc72d3b347ea5068c17db0082002c0ad] <==
	{"level":"warn","ts":"2025-11-24T03:11:57.739172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.744954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.754282Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.762119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.769570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.778424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.786039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.805276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.809138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.823265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42950","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.837245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.844772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.850649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.856566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.863385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.870863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.877047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.882850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.889601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.895844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.903801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.918557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.927434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:57.935738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:11:58.008527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43236","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:12:05 up  1:54,  0 user,  load average: 3.79, 3.84, 2.53
	Linux newest-cni-438041 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ca8f3e49d19f2e01029131d694cb2f3366da510e6fc4954a37c6be9f877ca0a4] <==
	I1124 03:11:59.364644       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:11:59.364984       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 03:11:59.365144       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:11:59.365159       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:11:59.365179       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:11:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:11:59.649482       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:11:59.649517       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:11:59.649533       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:11:59.650121       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:11:59.949816       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:11:59.949849       1 metrics.go:72] Registering metrics
	I1124 03:11:59.949928       1 controller.go:711] "Syncing nftables rules"
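
kindnet reports syncing nftables rules for its kube-network-policies controller (the NRI socket message appears non-fatal here, since the caches sync afterward). To inspect the rules it programmed on the node, a sketch, assuming nft is installed in the minikube image:

	sudo nft list ruleset
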
	
	
	==> kube-apiserver [0903674c0ff17f5f88d257aea9b1e2cf56ff9103105cdbeb4e86732b145c0bef] <==
	I1124 03:11:58.520225       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 03:11:58.522634       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 03:11:58.523186       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 03:11:58.523356       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 03:11:58.523412       1 aggregator.go:171] initial CRD sync complete...
	I1124 03:11:58.523431       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 03:11:58.523437       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:11:58.523444       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:11:58.523606       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 03:11:58.523659       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:11:58.528867       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 03:11:58.532581       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 03:11:58.544396       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:11:58.835348       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:11:58.869049       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:11:58.899830       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:11:58.921311       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:11:58.928584       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:11:58.976701       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.138.81"}
	I1124 03:11:58.987738       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.237.249"}
	I1124 03:11:59.414454       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:12:02.230862       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:12:02.281400       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:12:02.330655       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:12:02.381285       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [5dcec9dda2453f45f4516eff019d2077d2052e95c11d896705f53b3ac53c11a9] <==
	I1124 03:12:01.786270       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:12:01.788477       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:12:01.795388       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:12:01.798651       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:12:01.801826       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:12:01.828538       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:12:01.828587       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:12:01.828598       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:12:01.828606       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:12:01.828622       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:12:01.828651       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 03:12:01.828669       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:12:01.828720       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:12:01.828842       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:12:01.828963       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 03:12:01.829313       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 03:12:01.831284       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:12:01.833452       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:12:01.833488       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:12:01.834587       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 03:12:01.834627       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 03:12:01.834679       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 03:12:01.834687       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 03:12:01.834691       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 03:12:01.849833       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e62bf0a89aa63020c936ad6a03e6c0480301613e4cd0a960cd58ad2ac3fe719f] <==
	I1124 03:11:59.212239       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:11:59.279078       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:11:59.379718       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:11:59.379761       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 03:11:59.379963       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:11:59.397342       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:11:59.397399       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:11:59.402822       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:11:59.403205       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:11:59.403226       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:11:59.404460       1 config.go:200] "Starting service config controller"
	I1124 03:11:59.404486       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:11:59.404564       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:11:59.404586       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:11:59.404739       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:11:59.404761       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:11:59.404809       1 config.go:309] "Starting node config controller"
	I1124 03:11:59.404814       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:11:59.404818       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:11:59.504577       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:11:59.505732       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:11:59.505785       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [a629768f55496c2969d757c473189f52d99ddea90e0a365150097df5fe2ec9e2] <==
	I1124 03:11:57.020490       1 serving.go:386] Generated self-signed cert in-memory
	W1124 03:11:58.466869       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:11:58.466956       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 03:11:58.466969       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:11:58.466988       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:11:58.507144       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:11:58.507199       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:11:58.510544       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:11:58.510587       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:11:58.511577       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:11:58.511843       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:11:58.611657       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:11:57 newest-cni-438041 kubelet[674]: E1124 03:11:57.886686     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-438041\" not found" node="newest-cni-438041"
	Nov 24 03:11:57 newest-cni-438041 kubelet[674]: E1124 03:11:57.886829     674 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-438041\" not found" node="newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.438088     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.543727     674 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.545006     674 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.545052     674 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.545962     674 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: E1124 03:11:58.557289     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-438041\" already exists" pod="kube-system/kube-apiserver-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.557331     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: E1124 03:11:58.565301     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-438041\" already exists" pod="kube-system/kube-controller-manager-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.565336     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: E1124 03:11:58.573007     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-438041\" already exists" pod="kube-system/kube-scheduler-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.573035     674 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: E1124 03:11:58.581692     674 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-438041\" already exists" pod="kube-system/etcd-newest-cni-438041"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.812209     674 apiserver.go:52] "Watching apiserver"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.829949     674 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.832023     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19fa7668-24bd-454c-a5df-37534a06d3a5-xtables-lock\") pod \"kindnet-xp46p\" (UID: \"19fa7668-24bd-454c-a5df-37534a06d3a5\") " pod="kube-system/kindnet-xp46p"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.832116     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86f875e2-7efc-4b60-b031-a1de71ea7502-lib-modules\") pod \"kube-proxy-n85pg\" (UID: \"86f875e2-7efc-4b60-b031-a1de71ea7502\") " pod="kube-system/kube-proxy-n85pg"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.832709     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19fa7668-24bd-454c-a5df-37534a06d3a5-lib-modules\") pod \"kindnet-xp46p\" (UID: \"19fa7668-24bd-454c-a5df-37534a06d3a5\") " pod="kube-system/kindnet-xp46p"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.832747     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86f875e2-7efc-4b60-b031-a1de71ea7502-xtables-lock\") pod \"kube-proxy-n85pg\" (UID: \"86f875e2-7efc-4b60-b031-a1de71ea7502\") " pod="kube-system/kube-proxy-n85pg"
	Nov 24 03:11:58 newest-cni-438041 kubelet[674]: I1124 03:11:58.832783     674 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/19fa7668-24bd-454c-a5df-37534a06d3a5-cni-cfg\") pod \"kindnet-xp46p\" (UID: \"19fa7668-24bd-454c-a5df-37534a06d3a5\") " pod="kube-system/kindnet-xp46p"
	Nov 24 03:12:00 newest-cni-438041 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 03:12:00 newest-cni-438041 kubelet[674]: I1124 03:12:00.689240     674 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 03:12:00 newest-cni-438041 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 03:12:00 newest-cni-438041 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
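Note: the kubelet tail above ends with systemd deactivating kubelet.service, which matches the first step of minikube's pause flow (the old-k8s-version capture below shows it running `sudo systemctl disable --now kubelet` before freezing containers). A minimal manual check of that state, assuming the node container is still up, would be:

	minikube ssh -p newest-cni-438041 -- sudo systemctl is-active kubelet
	# "inactive" means pause stopped the kubelet; "active" means it never got that far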
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438041 -n newest-cni-438041
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-438041 -n newest-cni-438041: exit status 2 (344.390666ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-438041 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-b5rlp coredns-66bc5c9577-mwvq8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lgwxm kubernetes-dashboard-855c9754f9-4l8m4
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-438041 describe pod coredns-66bc5c9577-b5rlp coredns-66bc5c9577-mwvq8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lgwxm kubernetes-dashboard-855c9754f9-4l8m4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-438041 describe pod coredns-66bc5c9577-b5rlp coredns-66bc5c9577-mwvq8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lgwxm kubernetes-dashboard-855c9754f9-4l8m4: exit status 1 (66.620791ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-b5rlp" not found
	Error from server (NotFound): pods "coredns-66bc5c9577-mwvq8" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-lgwxm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-4l8m4" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-438041 describe pod coredns-66bc5c9577-b5rlp coredns-66bc5c9577-mwvq8 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-lgwxm kubernetes-dashboard-855c9754f9-4l8m4: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.93s)
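For reference, the non-running-pods probe in the post-mortem above is a plain field-selector query; an equivalent standalone invocation (context name taken from this run) is:

	kubectl --context newest-cni-438041 get pods -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'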

TestStartStop/group/old-k8s-version/serial/Pause (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-579951 --alsologtostderr -v=1
E1124 03:12:49.380766  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-579951 --alsologtostderr -v=1: exit status 80 (2.096578059s)

-- stdout --
	* Pausing node old-k8s-version-579951 ... 
	
	

-- /stdout --
** stderr ** 
	I1124 03:12:48.307077  665649 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:12:48.307300  665649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:48.307307  665649 out.go:374] Setting ErrFile to fd 2...
	I1124 03:12:48.307312  665649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:48.307486  665649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:12:48.307719  665649 out.go:368] Setting JSON to false
	I1124 03:12:48.307742  665649 mustload.go:66] Loading cluster: old-k8s-version-579951
	I1124 03:12:48.308103  665649 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:12:48.308489  665649 cli_runner.go:164] Run: docker container inspect old-k8s-version-579951 --format={{.State.Status}}
	I1124 03:12:48.326249  665649 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:12:48.326484  665649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:48.382241  665649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-24 03:12:48.37171773 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:48.382836  665649 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763935228-21975/minikube-v1.37.0-1763935228-21975-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763935228-21975-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-579951 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 03:12:48.385184  665649 out.go:179] * Pausing node old-k8s-version-579951 ... 
	I1124 03:12:48.386155  665649 host.go:66] Checking if "old-k8s-version-579951" exists ...
	I1124 03:12:48.386442  665649 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:48.386489  665649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-579951
	I1124 03:12:48.405140  665649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33478 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/old-k8s-version-579951/id_rsa Username:docker}
	I1124 03:12:48.502655  665649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:48.515169  665649 pause.go:52] kubelet running: true
	I1124 03:12:48.515241  665649 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:12:48.677492  665649 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:12:48.677589  665649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:12:48.745244  665649 cri.go:89] found id: "cb140932ac86175b67bebb44bcf5349167921dded9f33f07bf73f2c99536262b"
	I1124 03:12:48.745263  665649 cri.go:89] found id: "4c225ee065df81dadf568669357be2b97899826cbcb60f9c3ac3b714637ac073"
	I1124 03:12:48.745267  665649 cri.go:89] found id: "ed35b2d1051955151fc61b8cc47f4e8d1bd605dd3fe30e9571e13bb8c6a72a2d"
	I1124 03:12:48.745270  665649 cri.go:89] found id: "cbd2e7dfcfb37a19af31d60fb1906fc2f2ff1f04f8b5e0b378efbf444e50673f"
	I1124 03:12:48.745273  665649 cri.go:89] found id: "bbc5f27e635d1171390cb9cc082c8e71358be7dd9d3966888be81466bec32466"
	I1124 03:12:48.745276  665649 cri.go:89] found id: "cc8b5ee4851c9ae1241dd77995f3d1a2e725abb08136f47c106f5adf7f25f2a7"
	I1124 03:12:48.745279  665649 cri.go:89] found id: "3176f2d8220eaa411e72fa77d582041c78e4d0b8acbd739cd01992ec3cfa7230"
	I1124 03:12:48.745282  665649 cri.go:89] found id: "30d22d684ad7501e38080ff45bbe87f71a21252754ba692fc20125e3845f807a"
	I1124 03:12:48.745286  665649 cri.go:89] found id: "3356da3bf9c8232ed305911fa37644fd0513640f4477238b1a7e39b8e438c2a0"
	I1124 03:12:48.745294  665649 cri.go:89] found id: "ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a"
	I1124 03:12:48.745298  665649 cri.go:89] found id: "c0829291d94ab54222f0c979e770045678982177db6d180fb2f94c79be1258de"
	I1124 03:12:48.745302  665649 cri.go:89] found id: ""
	I1124 03:12:48.745347  665649 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:12:48.757762  665649 retry.go:31] will retry after 231.012255ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:48Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:48.989247  665649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:49.002910  665649 pause.go:52] kubelet running: false
	I1124 03:12:49.003001  665649 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:12:49.172288  665649 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:12:49.172390  665649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:12:49.239775  665649 cri.go:89] found id: "cb140932ac86175b67bebb44bcf5349167921dded9f33f07bf73f2c99536262b"
	I1124 03:12:49.239800  665649 cri.go:89] found id: "4c225ee065df81dadf568669357be2b97899826cbcb60f9c3ac3b714637ac073"
	I1124 03:12:49.239804  665649 cri.go:89] found id: "ed35b2d1051955151fc61b8cc47f4e8d1bd605dd3fe30e9571e13bb8c6a72a2d"
	I1124 03:12:49.239808  665649 cri.go:89] found id: "cbd2e7dfcfb37a19af31d60fb1906fc2f2ff1f04f8b5e0b378efbf444e50673f"
	I1124 03:12:49.239811  665649 cri.go:89] found id: "bbc5f27e635d1171390cb9cc082c8e71358be7dd9d3966888be81466bec32466"
	I1124 03:12:49.239814  665649 cri.go:89] found id: "cc8b5ee4851c9ae1241dd77995f3d1a2e725abb08136f47c106f5adf7f25f2a7"
	I1124 03:12:49.239817  665649 cri.go:89] found id: "3176f2d8220eaa411e72fa77d582041c78e4d0b8acbd739cd01992ec3cfa7230"
	I1124 03:12:49.239820  665649 cri.go:89] found id: "30d22d684ad7501e38080ff45bbe87f71a21252754ba692fc20125e3845f807a"
	I1124 03:12:49.239822  665649 cri.go:89] found id: "3356da3bf9c8232ed305911fa37644fd0513640f4477238b1a7e39b8e438c2a0"
	I1124 03:12:49.239828  665649 cri.go:89] found id: "ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a"
	I1124 03:12:49.239830  665649 cri.go:89] found id: "c0829291d94ab54222f0c979e770045678982177db6d180fb2f94c79be1258de"
	I1124 03:12:49.239833  665649 cri.go:89] found id: ""
	I1124 03:12:49.239875  665649 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:12:49.252076  665649 retry.go:31] will retry after 239.96318ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:49Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:49.492329  665649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:49.505782  665649 pause.go:52] kubelet running: false
	I1124 03:12:49.505845  665649 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:12:49.644838  665649 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:12:49.644917  665649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:12:49.713633  665649 cri.go:89] found id: "cb140932ac86175b67bebb44bcf5349167921dded9f33f07bf73f2c99536262b"
	I1124 03:12:49.713651  665649 cri.go:89] found id: "4c225ee065df81dadf568669357be2b97899826cbcb60f9c3ac3b714637ac073"
	I1124 03:12:49.713655  665649 cri.go:89] found id: "ed35b2d1051955151fc61b8cc47f4e8d1bd605dd3fe30e9571e13bb8c6a72a2d"
	I1124 03:12:49.713659  665649 cri.go:89] found id: "cbd2e7dfcfb37a19af31d60fb1906fc2f2ff1f04f8b5e0b378efbf444e50673f"
	I1124 03:12:49.713662  665649 cri.go:89] found id: "bbc5f27e635d1171390cb9cc082c8e71358be7dd9d3966888be81466bec32466"
	I1124 03:12:49.713665  665649 cri.go:89] found id: "cc8b5ee4851c9ae1241dd77995f3d1a2e725abb08136f47c106f5adf7f25f2a7"
	I1124 03:12:49.713668  665649 cri.go:89] found id: "3176f2d8220eaa411e72fa77d582041c78e4d0b8acbd739cd01992ec3cfa7230"
	I1124 03:12:49.713671  665649 cri.go:89] found id: "30d22d684ad7501e38080ff45bbe87f71a21252754ba692fc20125e3845f807a"
	I1124 03:12:49.713674  665649 cri.go:89] found id: "3356da3bf9c8232ed305911fa37644fd0513640f4477238b1a7e39b8e438c2a0"
	I1124 03:12:49.713679  665649 cri.go:89] found id: "ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a"
	I1124 03:12:49.713682  665649 cri.go:89] found id: "c0829291d94ab54222f0c979e770045678982177db6d180fb2f94c79be1258de"
	I1124 03:12:49.713684  665649 cri.go:89] found id: ""
	I1124 03:12:49.713731  665649 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:12:49.725764  665649 retry.go:31] will retry after 374.20166ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:49Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:50.101077  665649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:50.114170  665649 pause.go:52] kubelet running: false
	I1124 03:12:50.114220  665649 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:12:50.256569  665649 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:12:50.256656  665649 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:12:50.323374  665649 cri.go:89] found id: "cb140932ac86175b67bebb44bcf5349167921dded9f33f07bf73f2c99536262b"
	I1124 03:12:50.323395  665649 cri.go:89] found id: "4c225ee065df81dadf568669357be2b97899826cbcb60f9c3ac3b714637ac073"
	I1124 03:12:50.323402  665649 cri.go:89] found id: "ed35b2d1051955151fc61b8cc47f4e8d1bd605dd3fe30e9571e13bb8c6a72a2d"
	I1124 03:12:50.323407  665649 cri.go:89] found id: "cbd2e7dfcfb37a19af31d60fb1906fc2f2ff1f04f8b5e0b378efbf444e50673f"
	I1124 03:12:50.323412  665649 cri.go:89] found id: "bbc5f27e635d1171390cb9cc082c8e71358be7dd9d3966888be81466bec32466"
	I1124 03:12:50.323417  665649 cri.go:89] found id: "cc8b5ee4851c9ae1241dd77995f3d1a2e725abb08136f47c106f5adf7f25f2a7"
	I1124 03:12:50.323421  665649 cri.go:89] found id: "3176f2d8220eaa411e72fa77d582041c78e4d0b8acbd739cd01992ec3cfa7230"
	I1124 03:12:50.323425  665649 cri.go:89] found id: "30d22d684ad7501e38080ff45bbe87f71a21252754ba692fc20125e3845f807a"
	I1124 03:12:50.323430  665649 cri.go:89] found id: "3356da3bf9c8232ed305911fa37644fd0513640f4477238b1a7e39b8e438c2a0"
	I1124 03:12:50.323446  665649 cri.go:89] found id: "ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a"
	I1124 03:12:50.323451  665649 cri.go:89] found id: "c0829291d94ab54222f0c979e770045678982177db6d180fb2f94c79be1258de"
	I1124 03:12:50.323455  665649 cri.go:89] found id: ""
	I1124 03:12:50.323493  665649 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:12:50.336998  665649 out.go:203] 
	W1124 03:12:50.338555  665649 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:50Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:50Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:12:50.338575  665649 out.go:285] * 
	* 
	W1124 03:12:50.343418  665649 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:12:50.344602  665649 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-579951 --alsologtostderr -v=1 failed: exit status 80
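The stderr capture above shows the underlying failure: every retry finds the kube-system containers via crictl, but the follow-up `sudo runc list -f json` exits 1 because /run/runc does not exist on this cri-o node. Both probes can be replayed by hand (a sketch against this profile; whether runc keeps state under /run/runc depends on the runtime configuration):

	minikube ssh -p old-k8s-version-579951 -- sudo runc list -f json          # fails: open /run/runc: no such file or directory
	minikube ssh -p old-k8s-version-579951 -- sudo crictl ps --state running  # works: crictl queries cri-o over its CRI socket instead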
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-579951
helpers_test.go:243: (dbg) docker inspect old-k8s-version-579951:

-- stdout --
	[
	    {
	        "Id": "3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7",
	        "Created": "2025-11-24T03:10:32.99838887Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 650944,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:11:48.56517649Z",
	            "FinishedAt": "2025-11-24T03:11:47.732440396Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7/hosts",
	        "LogPath": "/var/lib/docker/containers/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7-json.log",
	        "Name": "/old-k8s-version-579951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-579951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-579951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7",
	                "LowerDir": "/var/lib/docker/overlay2/7475b95118b29264700554efe3a336409ac4e9cbb1a3e2671ad217c583d5e887-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7475b95118b29264700554efe3a336409ac4e9cbb1a3e2671ad217c583d5e887/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7475b95118b29264700554efe3a336409ac4e9cbb1a3e2671ad217c583d5e887/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7475b95118b29264700554efe3a336409ac4e9cbb1a3e2671ad217c583d5e887/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-579951",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-579951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-579951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-579951",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-579951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "011105d85338a845940181cca52916d7f23b363fc02f1d2de87d5e91349bd4a9",
	            "SandboxKey": "/var/run/docker/netns/011105d85338",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-579951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ca041b7f18e6d1ec0481cbe24b969048a40ddf73308219ebc68c053037d8a9f",
	                    "EndpointID": "4a24fac5c1f388d38c08fbcc81dc86ef83be1d89ce8a662edd66ef26dffb3bcc",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "86:aa:bc:3c:36:eb",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-579951",
	                        "3f9d9080b81a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-579951 -n old-k8s-version-579951
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-579951 -n old-k8s-version-579951: exit status 2 (325.071849ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-579951 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-579951 logs -n 25: (1.132033012s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-438041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-579951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p newest-cni-438041 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ stop    │ -p old-k8s-version-579951 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-603010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993813 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ stop    │ -p no-preload-603010 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-579951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-438041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ image   │ newest-cni-438041 image list --format=json                                                                                                                                                                                                    │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p newest-cni-438041 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-603010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p disable-driver-mounts-242597                                                                                                                                                                                                               │ disable-driver-mounts-242597 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ image   │ old-k8s-version-579951 image list --format=json                                                                                                                                                                                               │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p old-k8s-version-579951 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:12:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
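
A note for readers skimming the trace: per the format line above, each entry decodes as severity (I/W/E/F), date mmdd, wall-clock time with microseconds, the emitting process id, and the source file:line. A quick way to see which source files dominate a noisy trace, as a sketch — saving the trace to a file named last_start.log is my assumption, not something the harness does:

	# Tally entries per "file:line", busiest first; field 4 is "file:line]"
	# under the [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg layout.
	awk '$4 ~ /:[0-9]+]$/ { sub(/]$/, "", $4); n[$4]++ }
	     END { for (f in n) print n[f], f }' last_start.log | sort -rn | head
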
	I1124 03:12:09.055015  658811 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:12:09.055230  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055247  658811 out.go:374] Setting ErrFile to fd 2...
	I1124 03:12:09.055253  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055468  658811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:12:09.055909  658811 out.go:368] Setting JSON to false
	I1124 03:12:09.056956  658811 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6876,"bootTime":1763947053,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:12:09.057009  658811 start.go:143] virtualization: kvm guest
	I1124 03:12:09.058671  658811 out.go:179] * [embed-certs-284604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:12:09.059850  658811 notify.go:221] Checking for updates...
	I1124 03:12:09.059855  658811 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:12:09.061128  658811 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:12:09.062317  658811 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:09.063358  658811 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:12:09.064255  658811 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:12:09.065078  658811 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:12:09.066407  658811 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066509  658811 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066589  658811 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:12:09.066666  658811 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:12:09.089713  658811 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:12:09.089855  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.145948  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.135562124 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.146071  658811 docker.go:319] overlay module found
	I1124 03:12:09.147708  658811 out.go:179] * Using the docker driver based on user configuration
	I1124 03:12:09.148714  658811 start.go:309] selected driver: docker
	I1124 03:12:09.148737  658811 start.go:927] validating driver "docker" against <nil>
	I1124 03:12:09.148747  658811 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:12:09.149338  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.210343  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.200351707 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.210534  658811 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:12:09.210794  658811 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:09.212381  658811 out.go:179] * Using Docker driver with root privileges
	I1124 03:12:09.213398  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:09.213482  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:09.213497  658811 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:12:09.213574  658811 start.go:353] cluster config:
	{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:09.214730  658811 out.go:179] * Starting "embed-certs-284604" primary control-plane node in "embed-certs-284604" cluster
	I1124 03:12:09.215613  658811 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:12:09.216663  658811 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:12:09.217654  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.217694  658811 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:12:09.217703  658811 cache.go:65] Caching tarball of preloaded images
	I1124 03:12:09.217732  658811 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:12:09.217791  658811 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:12:09.217808  658811 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:12:09.217977  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:09.218021  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json: {Name:mkd4898576ebe0ebf6d2ca35fddd33eac8f127df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:09.239944  658811 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:12:09.239962  658811 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:12:09.239976  658811 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:12:09.240004  658811 start.go:360] acquireMachinesLock for embed-certs-284604: {Name:mkd39be5908e1d289ed5af40b6c2b1c510beffd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:12:09.240088  658811 start.go:364] duration metric: took 68.665µs to acquireMachinesLock for "embed-certs-284604"
	I1124 03:12:09.240109  658811 start.go:93] Provisioning new machine with config: &{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:09.240182  658811 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:12:05.014758  656542 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-993813" ...
	I1124 03:12:05.014805  656542 cli_runner.go:164] Run: docker start default-k8s-diff-port-993813
	I1124 03:12:05.297424  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:05.316835  656542 kic.go:430] container "default-k8s-diff-port-993813" state is running.
	I1124 03:12:05.317309  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:05.336690  656542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/config.json ...
	I1124 03:12:05.336923  656542 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:05.336992  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:05.356564  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:05.356863  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:05.356907  656542 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:05.357642  656542 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39256->127.0.0.1:33488: read: connection reset by peer
	I1124 03:12:08.497704  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
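
The "Error dialing TCP ... connection reset by peer" entry a few lines up is expected noise: the freshly restarted container's sshd is not accepting connections yet, and libmachine simply retries until the hostname probe succeeds (about three seconds later here). A rough sketch of the same wait loop — port 33488 is the mapped SSH port recorded in this log, while the 30-attempt budget and key handling are my assumptions:

	# Poll the container's mapped SSH port until sshd answers.
	for i in $(seq 1 30); do
	  ssh -o ConnectTimeout=2 -o StrictHostKeyChecking=no -p 33488 \
	      docker@127.0.0.1 true 2>/dev/null && break
	  sleep 1
	done
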
	I1124 03:12:08.497744  656542 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993813"
	I1124 03:12:08.497799  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.516284  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.516620  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.516642  656542 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993813 && echo "default-k8s-diff-port-993813" | sudo tee /etc/hostname
	I1124 03:12:08.664299  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.664399  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.683215  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.683424  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.683440  656542 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993813/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:08.824495  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:08.824534  656542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:08.824571  656542 ubuntu.go:190] setting up certificates
	I1124 03:12:08.824597  656542 provision.go:84] configureAuth start
	I1124 03:12:08.824659  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:08.842592  656542 provision.go:143] copyHostCerts
	I1124 03:12:08.842639  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:08.842651  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:08.842701  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:08.842805  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:08.842813  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:08.842838  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:08.842940  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:08.842950  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:08.842981  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:08.843051  656542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993813 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-993813 localhost minikube]
	I1124 03:12:08.993088  656542 provision.go:177] copyRemoteCerts
	I1124 03:12:08.993141  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:08.993180  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.010481  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.112610  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:09.134182  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 03:12:09.153393  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:12:09.173516  656542 provision.go:87] duration metric: took 348.902104ms to configureAuth
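
configureAuth (provision.go:84 through the scp entries above) regenerates a server certificate whose SAN list covers every name the machine may be reached by — 127.0.0.1, the container IP 192.168.76.2, the profile name, localhost, minikube — and copies it into /etc/docker on the guest. A hand-rolled approximation with openssl; file names and SANs are taken from the log entries above, but the openssl invocation itself is my sketch, not minikube's code:

	# Issue a server cert signed by the minikube CA with the SANs seen above.
	openssl req -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.default-k8s-diff-port-993813" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	  -CAcreateserial -days 1095 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:default-k8s-diff-port-993813,DNS:localhost,DNS:minikube')
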
	I1124 03:12:09.173547  656542 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:09.173717  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.173820  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.195519  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:09.195738  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:09.195756  656542 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.551404  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:09.551434  656542 machine.go:97] duration metric: took 4.214494542s to provisionDockerMachine
	I1124 03:12:09.551449  656542 start.go:293] postStartSetup for "default-k8s-diff-port-993813" (driver="docker")
	I1124 03:12:09.551463  656542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:09.551533  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:09.551574  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.572440  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.684044  656542 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:09.688328  656542 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:09.688354  656542 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:09.688365  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:09.688414  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:09.688488  656542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:09.688660  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:09.696023  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:09.725715  656542 start.go:296] duration metric: took 174.248037ms for postStartSetup
	I1124 03:12:09.725795  656542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:09.725851  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.747235  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:06.610202  657716 out.go:252] * Restarting existing docker container for "no-preload-603010" ...
	I1124 03:12:06.610267  657716 cli_runner.go:164] Run: docker start no-preload-603010
	I1124 03:12:06.895418  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:06.913279  657716 kic.go:430] container "no-preload-603010" state is running.
	I1124 03:12:06.913694  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:06.931543  657716 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/config.json ...
	I1124 03:12:06.931779  657716 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:06.931840  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:06.949180  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:06.949422  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:06.949436  657716 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:06.950106  657716 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53738->127.0.0.1:33493: read: connection reset by peer
	I1124 03:12:10.094410  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.094455  657716 ubuntu.go:182] provisioning hostname "no-preload-603010"
	I1124 03:12:10.094548  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.117277  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.117614  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.117637  657716 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-603010 && echo "no-preload-603010" | sudo tee /etc/hostname
	I1124 03:12:10.272082  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.272162  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.293197  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.293525  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.293557  657716 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603010' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603010/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603010' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:10.440289  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:10.440322  657716 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:10.440350  657716 ubuntu.go:190] setting up certificates
	I1124 03:12:10.440374  657716 provision.go:84] configureAuth start
	I1124 03:12:10.440443  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:10.458672  657716 provision.go:143] copyHostCerts
	I1124 03:12:10.458743  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:10.458766  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:10.458857  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:10.459021  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:10.459037  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:10.459080  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:10.459183  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:10.459195  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:10.459232  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:10.459323  657716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.no-preload-603010 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-603010]
	I1124 03:12:10.546420  657716 provision.go:177] copyRemoteCerts
	I1124 03:12:10.546503  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:10.546552  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.564799  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:10.669343  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:10.687953  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:10.707320  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:10.728398  657716 provision.go:87] duration metric: took 288.002675ms to configureAuth
	I1124 03:12:10.728450  657716 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:10.728791  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:10.728992  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.754544  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.754857  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.754907  657716 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.846210  656542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:09.851045  656542 fix.go:56] duration metric: took 4.853815531s for fixHost
	I1124 03:12:09.851067  656542 start.go:83] releasing machines lock for "default-k8s-diff-port-993813", held for 4.853861223s
	I1124 03:12:09.851139  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:09.871679  656542 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:09.871744  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.871767  656542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:09.871859  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.897665  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.897832  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.996390  656542 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:10.070447  656542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:10.108350  656542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:10.113659  656542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:10.113732  656542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:10.122258  656542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:12:10.122274  656542 start.go:496] detecting cgroup driver to use...
	I1124 03:12:10.122301  656542 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:10.122333  656542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:10.138420  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:10.151623  656542 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:10.151696  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:10.169717  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:10.185403  656542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:10.268937  656542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:10.361626  656542 docker.go:234] disabling docker service ...
	I1124 03:12:10.361713  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:10.376259  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:10.389709  656542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:10.493317  656542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:10.581163  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:10.594309  656542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:10.608489  656542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:10.608559  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.618090  656542 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:10.618147  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.629142  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.639755  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.648289  656542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:10.657390  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.667835  656542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.677148  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.686554  656542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:10.694262  656542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:10.701983  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:10.784645  656542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:12:13.176259  656542 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.391580237s)
	I1124 03:12:13.176297  656542 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:13.176344  656542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:13.182771  656542 start.go:564] Will wait 60s for crictl version
	I1124 03:12:13.182920  656542 ssh_runner.go:195] Run: which crictl
	I1124 03:12:13.188282  656542 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:13.221129  656542 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:13.221208  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.256022  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.289098  656542 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
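
Condensed, the CRI-O reconfiguration the log just walked through (crio.go:59-70 plus the sed runs) comes down to the following sequence; every path and value is copied from the entries above, but the consolidation into one script is mine:

	# Point crictl at CRI-O's socket.
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# Pin the pause image and use the systemd cgroup driver, with conmon in the pod cgroup.
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# Ensure a default_sysctls block exists, then let pods bind low ports unprivileged.
	sudo grep -q '^ *default_sysctls' /etc/crio/crio.conf.d/02-crio.conf || \
	  sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf
	# Enable IPv4 forwarding and restart the runtime.
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
	sudo systemctl daemon-reload && sudo systemctl restart crio
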
	W1124 03:12:09.667322  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:11.810684  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:09.241811  658811 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:12:09.242074  658811 start.go:159] libmachine.API.Create for "embed-certs-284604" (driver="docker")
	I1124 03:12:09.242107  658811 client.go:173] LocalClient.Create starting
	I1124 03:12:09.242186  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:12:09.242224  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242246  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242326  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:12:09.242354  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242374  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242824  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:12:09.259427  658811 cli_runner.go:211] docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:12:09.259477  658811 network_create.go:284] running [docker network inspect embed-certs-284604] to gather additional debugging logs...
	I1124 03:12:09.259492  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604
	W1124 03:12:09.275004  658811 cli_runner.go:211] docker network inspect embed-certs-284604 returned with exit code 1
	I1124 03:12:09.275029  658811 network_create.go:287] error running [docker network inspect embed-certs-284604]: docker network inspect embed-certs-284604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-284604 not found
	I1124 03:12:09.275039  658811 network_create.go:289] output of [docker network inspect embed-certs-284604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-284604 not found
	
	** /stderr **
	I1124 03:12:09.275132  658811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:09.292074  658811 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:12:09.292745  658811 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:12:09.293207  658811 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:12:09.293801  658811 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-50b2e4e61586 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:0d:88:19:7d:df} reservation:<nil>}
	I1124 03:12:09.294406  658811 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6fb41680caed IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:a0:6c:44:95:b2} reservation:<nil>}
	I1124 03:12:09.295273  658811 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eef7f0}
	I1124 03:12:09.295296  658811 network_create.go:124] attempt to create docker network embed-certs-284604 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 03:12:09.295333  658811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-284604 embed-certs-284604
	I1124 03:12:09.341016  658811 network_create.go:108] docker network embed-certs-284604 192.168.94.0/24 created
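
The subnet hunt at network.go:211 above walks candidate private /24s — 192.168.49.0, .58, .67, .76, .85 — and settles on the first free one, 192.168.94.0/24; the observed stride is 9. A toy re-creation of that scan, with the start and stride inferred from this log rather than from minikube's source, using the br-* gateway addresses (the IfaceIPv4 values above) as the "taken" signal:

	# Print the first 192.168.x.0/24 whose .1 gateway address is not already
	# assigned to a local bridge interface (stride of 9 as observed above).
	for third in $(seq 49 9 250); do
	  if ! ip -4 addr show | grep -q "inet 192.168.${third}.1/"; then
	    echo "192.168.${third}.0/24 is free"
	    break
	  fi
	done
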
	I1124 03:12:09.341044  658811 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-284604" container
	I1124 03:12:09.341097  658811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:12:09.358710  658811 cli_runner.go:164] Run: docker volume create embed-certs-284604 --label name.minikube.sigs.k8s.io=embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:12:09.377491  658811 oci.go:103] Successfully created a docker volume embed-certs-284604
	I1124 03:12:09.377565  658811 cli_runner.go:164] Run: docker run --rm --name embed-certs-284604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --entrypoint /usr/bin/test -v embed-certs-284604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:12:09.757637  658811 oci.go:107] Successfully prepared a docker volume embed-certs-284604
	I1124 03:12:09.757726  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.757742  658811 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:12:09.757816  658811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:12:13.055592  658811 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (3.297719307s)
	I1124 03:12:13.055632  658811 kic.go:203] duration metric: took 3.29788472s to extract preloaded images to volume ...
	W1124 03:12:13.055721  658811 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:12:13.055758  658811 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:12:13.055810  658811 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:12:13.124836  658811 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-284604 --name embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-284604 --network embed-certs-284604 --ip 192.168.94.2 --volume embed-certs-284604:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:12:13.468642  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Running}}
	I1124 03:12:13.493010  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.520114  658811 cli_runner.go:164] Run: docker exec embed-certs-284604 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:12:13.579438  658811 oci.go:144] the created container "embed-certs-284604" has a running status.
	I1124 03:12:13.579473  658811 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa...
	I1124 03:12:13.686392  658811 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:12:13.719014  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.744934  658811 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:12:13.744979  658811 kic_runner.go:114] Args: [docker exec --privileged embed-certs-284604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:12:13.804379  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.833184  658811 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:13.833391  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:13.865266  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:13.865635  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:13.865670  658811 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:13.866448  658811 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55158->127.0.0.1:33498: read: connection reset by peer
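The inspect call above resolves the host port Docker mapped to the container's 22/tcp before dialing SSH; the first dial is reset because sshd inside the container is still starting, and the provisioner simply retries. A minimal sketch of that wait, assuming only the docker CLI and the Go standard library (the retry policy is illustrative, not minikube's actual one):

    // portprobe.go — resolve the mapped SSH port and poll until it accepts
    // TCP connections, mirroring the inspect template in the log above.
    package main

    import (
        "fmt"
        "net"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "embed-certs-284604").Output()
        if err != nil {
            panic(err)
        }
        addr := net.JoinHostPort("127.0.0.1", strings.TrimSpace(string(out)))

        // Early dials are often reset while sshd is still coming up, so
        // keep trying until a TCP connection succeeds.
        for i := 0; i < 30; i++ {
            conn, err := net.DialTimeout("tcp", addr, time.Second)
            if err == nil {
                conn.Close()
                fmt.Println("ssh port ready at", addr)
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("ssh port never became ready:", addr)
    }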
	I1124 03:12:13.290552  656542 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:13.314170  656542 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:13.318716  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:13.333300  656542 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:13.333436  656542 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:13.333523  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.375001  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.375027  656542 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:13.375078  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.407152  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.407180  656542 cache_images.go:86] Images are preloaded, skipping loading
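`sudo crictl images --output json` is how minikube decides whether the preload already satisfied the image cache; when every required image is listed, extraction and loading are skipped as above. A hedged sketch of that listing, assuming crictl's JSON output keeps its current shape (an "images" array whose entries carry "repoTags"):

    // imagecheck.go — list the repo tags cri-o already has, as the
    // preload-verification step above does. The JSON field names are an
    // assumption about crictl's output format.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type imageList struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var list imageList
        if err := json.Unmarshal(out, &list); err != nil {
            panic(err)
        }
        for _, img := range list.Images {
            for _, tag := range img.RepoTags {
                fmt.Println(tag)
            }
        }
    }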
	I1124 03:12:13.407190  656542 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 03:12:13.407342  656542 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
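The [Service] drop-in above clears ExecStart= before reassigning it: systemd accumulates ExecStart lines for a unit, and the empty assignment resets the list so only minikube's kubelet command line remains. A minimal sketch of installing such a drop-in and reloading systemd, standard library only (the kubelet flags are abbreviated from the log):

    // dropin.go — write a kubelet systemd drop-in like the one above and
    // reload systemd. Run as root; error handling is abbreviated.
    package main

    import (
        "os"
        "os/exec"
    )

    const dropin = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-993813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
    `

    func main() {
        // The empty ExecStart= resets systemd's accumulated command list.
        if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropin), 0o644); err != nil {
            panic(err)
        }
        // Pick up the new unit file, as the log's `systemctl daemon-reload` does.
        if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
            panic(err)
        }
    }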
	I1124 03:12:13.407444  656542 ssh_runner.go:195] Run: crio config
	I1124 03:12:13.468159  656542 cni.go:84] Creating CNI manager for ""
	I1124 03:12:13.468191  656542 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:13.468220  656542 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:13.468251  656542 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993813 NodeName:default-k8s-diff-port-993813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:13.468425  656542 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993813"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:13.468485  656542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:13.480922  656542 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:13.480989  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:13.491437  656542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 03:12:13.510538  656542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:13.531599  656542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
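kubeadm.yaml.new, written above, is a multi-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by "---". A small sketch that walks those documents and prints each apiVersion/kind, assuming the third-party gopkg.in/yaml.v3 package:

    // splitdocs.go — iterate the "---"-separated documents in the kubeadm
    // config above; yaml.v3 decodes one document per Decode call.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc map[string]interface{}
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
        }
    }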
	I1124 03:12:13.550625  656542 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:13.557123  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
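The bash one-liner above makes the /etc/hosts edit idempotent: any stale line ending in the host name is dropped, then the fresh "IP<TAB>name" mapping is appended. The same logic in Go, standard library only (run as root; the log resorts to sudo cp for the final write):

    // hostsentry.go — idempotent /etc/hosts update, equivalent to the
    // `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` pattern above.
    package main

    import (
        "os"
        "strings"
    )

    func main() {
        const name = "control-plane.minikube.internal"
        const entry = "192.168.76.2\t" + name

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        // Equivalent of `grep -v $'\t<name>$'`: keep lines not ending in the name.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }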
	I1124 03:12:13.570105  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:13.687069  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:13.711246  656542 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813 for IP: 192.168.76.2
	I1124 03:12:13.711268  656542 certs.go:195] generating shared ca certs ...
	I1124 03:12:13.711287  656542 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:13.711456  656542 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:13.711513  656542 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:13.711526  656542 certs.go:257] generating profile certs ...
	I1124 03:12:13.711642  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key
	I1124 03:12:13.711706  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619
	I1124 03:12:13.711753  656542 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key
	I1124 03:12:13.711996  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:13.712051  656542 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:13.712065  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:13.712101  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:13.712139  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:13.712175  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:13.712240  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.712851  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:13.744604  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:13.773924  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:13.797454  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:13.831783  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 03:12:13.870484  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:13.900124  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:13.922822  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:12:13.948171  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:13.977351  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:14.003032  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:14.029032  656542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:14.044929  656542 ssh_runner.go:195] Run: openssl version
	I1124 03:12:14.055102  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:14.069569  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074149  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074206  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.129455  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:14.139467  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:14.150460  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155547  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155598  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.213122  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:14.224488  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:14.235043  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239741  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239796  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.296275  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
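The `openssl x509 -hash` runs above compute OpenSSL's subject-name hash, and the `test -L || ln -fs` guards link each certificate as /etc/ssl/certs/<hash>.0, the layout OpenSSL-based clients use to locate trust anchors. A sketch of one hash-and-link step, shelling out to openssl since Go's standard library has no subject-hash helper:

    // hashlink.go — link a CA cert under its OpenSSL subject hash, as the
    // symlink commands above do. Run as root; the cert path is from the log.
    package main

    import (
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const cert = "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
        if err != nil {
            panic(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
        // Recreate the link idempotently, like the `test -L || ln -fs` guard.
        _ = os.Remove(link)
        if err := os.Symlink(cert, link); err != nil {
            panic(err)
        }
    }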
	I1124 03:12:14.307247  656542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:14.315784  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:14.374911  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:14.452037  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:14.514532  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:14.577046  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:14.634822  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
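Each `-checkend 86400` run above asks openssl whether the certificate expires within the next 24 hours; a non-zero exit would force certificate regeneration. The equivalent check with Go's standard library (the certificate list mirrors the log):

    // certcheck.go — flag any cluster cert that expires within 24h, the
    // same condition `openssl x509 -checkend 86400` tests above.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        certs := []string{
            "/var/lib/minikube/certs/apiserver-etcd-client.crt",
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/etcd/healthcheck-client.crt",
            "/var/lib/minikube/certs/etcd/peer.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        }
        deadline := time.Now().Add(24 * time.Hour) // -checkend 86400
        for _, path := range certs {
            data, err := os.ReadFile(path)
            if err != nil {
                panic(err)
            }
            block, _ := pem.Decode(data)
            if block == nil {
                panic("no PEM block in " + path)
            }
            cert, err := x509.ParseCertificate(block.Bytes)
            if err != nil {
                panic(err)
            }
            if cert.NotAfter.Before(deadline) {
                fmt.Printf("%s expires soon: %s\n", path, cert.NotAfter)
            }
        }
    }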
	I1124 03:12:14.697600  656542 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:14.697704  656542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:14.697759  656542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:14.736428  656542 cri.go:89] found id: "9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6"
	I1124 03:12:14.736451  656542 cri.go:89] found id: "a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329"
	I1124 03:12:14.736458  656542 cri.go:89] found id: "dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7"
	I1124 03:12:14.736462  656542 cri.go:89] found id: "11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e"
	I1124 03:12:14.736466  656542 cri.go:89] found id: ""
	I1124 03:12:14.736511  656542 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:14.754070  656542 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:14Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:14.754156  656542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:14.765200  656542 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:14.765224  656542 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:14.765273  656542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:14.773243  656542 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:14.773947  656542 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993813" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.774328  656542 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993813" cluster setting kubeconfig missing "default-k8s-diff-port-993813" context setting]
	I1124 03:12:14.774925  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.776519  656542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:14.785657  656542 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 03:12:14.785687  656542 kubeadm.go:602] duration metric: took 20.455875ms to restartPrimaryControlPlane
	I1124 03:12:14.785704  656542 kubeadm.go:403] duration metric: took 88.114399ms to StartCluster
	I1124 03:12:14.785722  656542 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.785796  656542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.786941  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.787180  656542 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:14.787429  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:14.787487  656542 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:14.787568  656542 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.787584  656542 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.787592  656542 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:14.787615  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.788183  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.788464  656542 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788516  656542 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993813"
	I1124 03:12:14.788466  656542 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788738  656542 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.788750  656542 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:14.788782  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.789431  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.789731  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.792034  656542 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:14.793166  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.820828  656542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:14.821632  656542 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.821655  656542 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:14.821731  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.821909  656542 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:12:14.822084  656542 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:14.822112  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:14.822188  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.822548  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.827335  656542 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:13.173638  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:13.173665  657716 machine.go:97] duration metric: took 6.241868553s to provisionDockerMachine
	I1124 03:12:13.173679  657716 start.go:293] postStartSetup for "no-preload-603010" (driver="docker")
	I1124 03:12:13.173692  657716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:13.173754  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:13.173803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.199819  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
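Every `ssh_runner.go:195] Run:` line in this log is one command executed over an SSH client like the one opened above: connect to the mapped port on 127.0.0.1 with the machine's id_rsa and run a single command per session. A minimal sketch, assuming the third-party golang.org/x/crypto/ssh package (port, user, and key path copied from the log):

    // sshrun.go — open an SSH client to the node container and run one
    // command, mirroring the sshutil/ssh_runner pattern above.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container, not a real host
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33493", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // One session per command, as each `ssh_runner.go:195] Run:` line implies.
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("cat /etc/os-release")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }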
	I1124 03:12:13.311414  657716 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:13.316263  657716 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:13.316292  657716 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:13.316304  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:13.316362  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:13.316451  657716 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:13.316564  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:13.330333  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.349678  657716 start.go:296] duration metric: took 175.98281ms for postStartSetup
	I1124 03:12:13.349757  657716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:13.349803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.372668  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.477580  657716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:13.483572  657716 fix.go:56] duration metric: took 6.891356705s for fixHost
	I1124 03:12:13.483602  657716 start.go:83] releasing machines lock for "no-preload-603010", held for 6.891418388s
	I1124 03:12:13.483679  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:13.509057  657716 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:13.509123  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.509169  657716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:13.509281  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.533830  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.535423  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.716640  657716 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:13.727633  657716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:13.784701  657716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:13.789877  657716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:13.789964  657716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:13.799956  657716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:12:13.799989  657716 start.go:496] detecting cgroup driver to use...
	I1124 03:12:13.800021  657716 detect.go:190] detected "systemd" cgroup driver on host os
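One common heuristic behind a detection like the one above: if the host runs systemd as init and the unified cgroup v2 hierarchy is mounted, prefer systemd-managed cgroups over cgroupfs. A hedged sketch of that heuristic, not necessarily minikube's exact logic:

    // cgroupdetect.go — pick a cgroup driver from two well-known markers:
    // /run/systemd/system exists when systemd is init, and
    // /sys/fs/cgroup/cgroup.controllers exists on cgroup v2.
    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        _, systemdErr := os.Stat("/run/systemd/system")
        _, v2Err := os.Stat("/sys/fs/cgroup/cgroup.controllers")
        if systemdErr == nil && v2Err == nil {
            fmt.Println("detected \"systemd\" cgroup driver")
        } else {
            fmt.Println("falling back to \"cgroupfs\"")
        }
    }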
	I1124 03:12:13.800080  657716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:13.821650  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:13.845364  657716 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:13.845437  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:13.876223  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:13.896810  657716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:14.018144  657716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:14.133192  657716 docker.go:234] disabling docker service ...
	I1124 03:12:14.133276  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:14.151812  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:14.167561  657716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:14.282838  657716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:14.401610  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:14.417930  657716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:14.437107  657716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:14.437170  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.449631  657716 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:14.449698  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.462463  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.477641  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.490417  657716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:14.504273  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.516484  657716 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.526509  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.538280  657716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:14.546998  657716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:14.555574  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.685636  657716 ssh_runner.go:195] Run: sudo systemctl restart crio
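The `sed -i` runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) before the restart just issued makes them effective. An illustrative Go equivalent of the first two substitutions:

    // crioconf.go — rewrite the pause_image and cgroup_manager lines, the
    // same edits the log performs with `sudo sed -i`. Run as root.
    package main

    import (
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        // s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
        // s|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
        if err := os.WriteFile(path, data, 0o644); err != nil {
            panic(err)
        }
        // cri-o must be restarted afterwards, as `systemctl restart crio` does.
    }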
	I1124 03:12:14.944749  657716 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:14.944917  657716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:14.950036  657716 start.go:564] Will wait 60s for crictl version
	I1124 03:12:14.950115  657716 ssh_runner.go:195] Run: which crictl
	I1124 03:12:14.954328  657716 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:14.985292  657716 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:14.985374  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.030503  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.075694  657716 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:15.076822  657716 cli_runner.go:164] Run: docker network inspect no-preload-603010 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:15.102488  657716 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:15.108702  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:15.124431  657716 kubeadm.go:884] updating cluster {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:15.124588  657716 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:15.124636  657716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:15.167486  657716 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:15.167521  657716 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:15.167539  657716 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:15.167821  657716 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-603010 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:15.167925  657716 ssh_runner.go:195] Run: crio config
	I1124 03:12:15.235069  657716 cni.go:84] Creating CNI manager for ""
	I1124 03:12:15.235092  657716 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:15.235110  657716 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:15.235137  657716 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603010 NodeName:no-preload-603010 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:15.235315  657716 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603010"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:15.235402  657716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:15.246426  657716 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:15.246486  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:15.255073  657716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:12:15.274174  657716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:15.291964  657716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1124 03:12:15.310704  657716 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:15.315241  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:15.329049  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:15.444004  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:15.468249  657716 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010 for IP: 192.168.85.2
	I1124 03:12:15.468275  657716 certs.go:195] generating shared ca certs ...
	I1124 03:12:15.468303  657716 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:15.468461  657716 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:15.468527  657716 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:15.468545  657716 certs.go:257] generating profile certs ...
	I1124 03:12:15.468671  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key
	I1124 03:12:15.468756  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738
	I1124 03:12:15.468820  657716 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key
	I1124 03:12:15.469056  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:15.469155  657716 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:15.469190  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:15.469235  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:15.469307  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:15.469360  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:15.469452  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:15.470423  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:15.492954  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:15.516840  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:15.539720  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:15.572434  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:12:15.602383  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:15.627969  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:15.650700  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:15.671263  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:15.692710  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:15.715510  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:15.740163  657716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:15.756242  657716 ssh_runner.go:195] Run: openssl version
	I1124 03:12:15.764455  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:15.774930  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779615  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779675  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.837760  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:12:15.848860  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:15.859402  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864242  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864304  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.923088  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:15.933908  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:15.944242  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949198  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949248  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:16.007273  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:16.018117  657716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:16.023108  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:16.086212  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:16.144287  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:16.203439  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:16.267980  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:16.329154  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 03:12:16.391972  657716 kubeadm.go:401] StartCluster: {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:16.392083  657716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:16.392153  657716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:16.431895  657716 cri.go:89] found id: "3dc1c0625a30c329549c40b7e0c43d1ab4d818ccc0fb85ea668066af23a327d3"
	I1124 03:12:16.431924  657716 cri.go:89] found id: "4e8e84f339bed2139577a34b2e6715ab1a8d9a3b425a16886245d61781115cf0"
	I1124 03:12:16.431930  657716 cri.go:89] found id: "7294ac6825ca8afcd936803374aa461bcb4d637ea8743600784037c5acb225b2"
	I1124 03:12:16.431934  657716 cri.go:89] found id: "767a9908e75937581bb6f9fd527e760a401658cde3d3cf28bc2b66613eab1027"
	I1124 03:12:16.431938  657716 cri.go:89] found id: ""
	I1124 03:12:16.431989  657716 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:16.448469  657716 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:16Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:16.448636  657716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:16.460046  657716 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:16.460066  657716 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:16.460159  657716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:16.470578  657716 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:16.472039  657716 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-603010" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.472691  657716 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-603010" cluster setting kubeconfig missing "no-preload-603010" context setting]
	I1124 03:12:16.473827  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
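
The repair adds the missing cluster and context entries to the kubeconfig while holding the file lock shown above. A hedged sketch of an equivalent repair using client-go's clientcmd package (entry names and the endpoint mirror the log; the exact fields and auth info minikube writes may differ):

package main

import (
	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/minikube-integration/21975-345525/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	name := "no-preload-603010"
	// Add the missing cluster entry (endpoint from the log: 192.168.85.2:8443).
	cfg.Clusters[name] = &clientcmdapi.Cluster{Server: "https://192.168.85.2:8443"}
	// Add the missing context entry pointing at that cluster.
	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
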
	I1124 03:12:16.476388  657716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:16.491280  657716 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 03:12:16.491307  657716 kubeadm.go:602] duration metric: took 31.234841ms to restartPrimaryControlPlane
	I1124 03:12:16.491317  657716 kubeadm.go:403] duration metric: took 99.357197ms to StartCluster
	I1124 03:12:16.491333  657716 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.491393  657716 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.492731  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.492990  657716 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:16.493291  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:16.493352  657716 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:16.493441  657716 addons.go:70] Setting storage-provisioner=true in profile "no-preload-603010"
	I1124 03:12:16.493465  657716 addons.go:239] Setting addon storage-provisioner=true in "no-preload-603010"
	W1124 03:12:16.493473  657716 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:16.493503  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494027  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.494266  657716 addons.go:70] Setting dashboard=true in profile "no-preload-603010"
	I1124 03:12:16.494322  657716 addons.go:239] Setting addon dashboard=true in "no-preload-603010"
	I1124 03:12:16.494338  657716 addons.go:70] Setting default-storageclass=true in profile "no-preload-603010"
	W1124 03:12:16.494361  657716 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:16.494434  657716 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603010"
	I1124 03:12:16.494570  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494863  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.495005  657716 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:16.495647  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.496468  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:16.527269  657716 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:16.528480  657716 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:16.528517  657716 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1124 03:12:14.168310  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:16.172923  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:18.176795  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:14.828319  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:14.828372  656542 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:14.828432  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.858092  656542 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:14.858118  656542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:14.858192  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.865650  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.866433  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.895242  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.975501  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:14.992389  656542 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:12:15.008151  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:15.016186  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:15.016211  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:15.031574  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:15.042522  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:15.042540  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:15.074331  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:15.074365  656542 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:15.109090  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:15.109113  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:15.128161  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:15.128184  656542 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:15.147874  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:15.147903  656542 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:15.168191  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:15.168211  656542 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:15.185637  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:15.185661  656542 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:15.202994  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:15.203016  656542 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:15.221608  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
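
All ten dashboard manifests are applied in one kubectl invocation with KUBECONFIG pointed at the cluster-local config. A minimal sketch of assembling that invocation from Go, run locally here for illustration where the log's ssh_runner executes it over SSH (the manifest list is abbreviated):

package main

import (
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
		// ... remaining dashboard-*.yaml files from the log
	}
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
	// The log sets KUBECONFIG explicitly so kubectl talks to this cluster.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}
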
	I1124 03:12:17.996962  656542 node_ready.go:49] node "default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:17.997067  656542 node_ready.go:38] duration metric: took 3.004589581s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:12:17.997096  656542 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:17.997184  656542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:18.834613  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.826385361s)
	I1124 03:12:18.834690  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.803092411s)
	I1124 03:12:18.834853  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.613213665s)
	I1124 03:12:18.834988  656542 api_server.go:72] duration metric: took 4.047778988s to wait for apiserver process to appear ...
	I1124 03:12:18.835771  656542 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:18.835800  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:18.838614  656542 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993813 addons enable metrics-server
	
	I1124 03:12:18.844882  656542 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 03:12:17.043130  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.043165  658811 ubuntu.go:182] provisioning hostname "embed-certs-284604"
	I1124 03:12:17.043247  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.069679  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.070109  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.070142  658811 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-284604 && echo "embed-certs-284604" | sudo tee /etc/hostname
	I1124 03:12:17.259114  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.259199  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.284082  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.284399  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.284433  658811 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-284604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-284604/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-284604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:17.452374  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:17.452411  658811 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:17.452438  658811 ubuntu.go:190] setting up certificates
	I1124 03:12:17.452452  658811 provision.go:84] configureAuth start
	I1124 03:12:17.452521  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:17.483434  658811 provision.go:143] copyHostCerts
	I1124 03:12:17.483502  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:17.483519  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:17.483580  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:17.483712  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:17.483725  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:17.483764  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:17.483851  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:17.483858  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:17.483909  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:17.483990  658811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-284604 san=[127.0.0.1 192.168.94.2 embed-certs-284604 localhost minikube]
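
The server certificate is generated from the machine CA with the SAN list shown (127.0.0.1, 192.168.94.2, the hostname, localhost, minikube). A self-contained sketch of the same idea using crypto/x509, with the SANs and the 26280h expiry taken from the log and everything else (key size, subjects) assumed for illustration:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Generate a throwaway CA, standing in for the machine CA in the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs mirror the san=[...] list in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-284604"}},
		DNSNames:     []string{"embed-certs-284604", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
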
	I1124 03:12:17.911206  658811 provision.go:177] copyRemoteCerts
	I1124 03:12:17.911335  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:17.911394  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.943914  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.069938  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:18.098447  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:18.124997  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:18.162531  658811 provision.go:87] duration metric: took 710.055135ms to configureAuth
	I1124 03:12:18.162560  658811 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:18.162764  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:18.162877  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.187248  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:18.187553  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:18.187575  658811 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:18.557227  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:18.557257  658811 machine.go:97] duration metric: took 4.723983027s to provisionDockerMachine
	I1124 03:12:18.557270  658811 client.go:176] duration metric: took 9.315155053s to LocalClient.Create
	I1124 03:12:18.557286  658811 start.go:167] duration metric: took 9.315214435s to libmachine.API.Create "embed-certs-284604"
	I1124 03:12:18.557298  658811 start.go:293] postStartSetup for "embed-certs-284604" (driver="docker")
	I1124 03:12:18.557310  658811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:18.557379  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:18.557432  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.587404  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.715877  658811 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:18.721275  658811 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:18.721309  658811 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:18.721322  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:18.721381  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:18.721473  658811 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:18.721597  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:18.732645  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:18.763370  658811 start.go:296] duration metric: took 206.056597ms for postStartSetup
	I1124 03:12:18.763732  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.791899  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:18.792183  658811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:18.792233  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.820806  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.936530  658811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:18.948570  658811 start.go:128] duration metric: took 9.708372989s to createHost
	I1124 03:12:18.948686  658811 start.go:83] releasing machines lock for "embed-certs-284604", held for 9.708587492s
	I1124 03:12:18.948771  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.973190  658811 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:18.973375  658811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:18.973512  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.973582  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.998620  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.999698  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.845938  656542 addons.go:530] duration metric: took 4.058450553s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 03:12:18.846295  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:18.846717  656542 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:12:19.335969  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:19.342155  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1124 03:12:19.343392  656542 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:19.343421  656542 api_server.go:131] duration metric: took 507.639836ms to wait for apiserver health ...
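
The 500-then-200 sequence above is normal apiserver warm-up: the rbac/bootstrap-roles and bootstrap-system-priority-classes post-start hooks finish a moment after the socket comes up, so the healthz endpoint is simply polled until it returns 200. A sketch of such a poll loop (this sketch skips TLS verification for brevity; the real check verifies against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for this sketch only: skip verification instead of
		// loading the cluster CA bundle.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.76.2:8444/healthz"
	for {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms retry cadence in the log
	}
}
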
	I1124 03:12:19.343433  656542 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:19.347170  656542 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:19.347220  656542 system_pods.go:61] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.347233  656542 system_pods.go:61] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.347244  656542 system_pods.go:61] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.347253  656542 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.347263  656542 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.347271  656542 system_pods.go:61] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.347279  656542 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.347290  656542 system_pods.go:61] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.347300  656542 system_pods.go:74] duration metric: took 3.857291ms to wait for pod list to return data ...
	I1124 03:12:19.347309  656542 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:19.350005  656542 default_sa.go:45] found service account: "default"
	I1124 03:12:19.350027  656542 default_sa.go:55] duration metric: took 2.709767ms for default service account to be created ...
	I1124 03:12:19.350036  656542 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:19.354450  656542 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:19.354480  656542 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.354492  656542 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.354502  656542 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.354512  656542 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.354525  656542 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.354534  656542 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.354542  656542 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.354550  656542 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.354560  656542 system_pods.go:126] duration metric: took 4.516416ms to wait for k8s-apps to be running ...
	I1124 03:12:19.354569  656542 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:19.354617  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:19.377699  656542 system_svc.go:56] duration metric: took 23.119925ms WaitForService to wait for kubelet
	I1124 03:12:19.377726  656542 kubeadm.go:587] duration metric: took 4.590516557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:19.377808  656542 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:19.381785  656542 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:19.381815  656542 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:19.381831  656542 node_conditions.go:105] duration metric: took 4.017737ms to run NodePressure ...
	I1124 03:12:19.381846  656542 start.go:242] waiting for startup goroutines ...
	I1124 03:12:19.381857  656542 start.go:247] waiting for cluster config update ...
	I1124 03:12:19.381883  656542 start.go:256] writing updated cluster config ...
	I1124 03:12:19.382229  656542 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:19.387932  656542 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:19.394333  656542 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
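
The extra wait inspects each control-plane pod, matched by the component/k8s-app labels listed above, for the PodReady condition. A hedged client-go sketch of one such readiness check (kubeconfig path and label selector taken from the log):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21975-345525/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			// A pod counts as "Ready" when its PodReady condition is True.
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", p.Name, ready)
	}
}
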
	I1124 03:12:16.529636  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:16.529719  657716 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.529826  657716 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:16.529877  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.530024  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:16.530070  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.534729  657716 addons.go:239] Setting addon default-storageclass=true in "no-preload-603010"
	W1124 03:12:16.534754  657716 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:16.534783  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.539339  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.565768  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.582397  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.585042  657716 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.585070  657716 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:16.585126  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.617946  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.706410  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:16.731745  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:16.731773  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:16.736337  657716 node_ready.go:35] waiting up to 6m0s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:16.736937  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.758823  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:16.758847  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:16.768684  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.788344  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:16.788369  657716 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:16.806593  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:16.806620  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:16.847576  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:16.847609  657716 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:16.867721  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:16.867755  657716 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:16.886765  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:16.886787  657716 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:16.907569  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:16.907732  657716 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:16.929396  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:16.929417  657716 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:16.958374  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:19.957067  657716 node_ready.go:49] node "no-preload-603010" is "Ready"
	I1124 03:12:19.957111  657716 node_ready.go:38] duration metric: took 3.220732108s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:19.957131  657716 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:19.957256  657716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:20.880814  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.143842388s)
	I1124 03:12:20.881241  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.112181993s)
	I1124 03:12:21.157660  657716 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.200376454s)
	I1124 03:12:21.157703  657716 api_server.go:72] duration metric: took 4.664681444s to wait for apiserver process to appear ...
	I1124 03:12:21.157713  657716 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:21.157733  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.158403  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199980339s)
	I1124 03:12:21.160177  657716 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-603010 addons enable metrics-server
	
	I1124 03:12:21.161363  657716 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 03:12:19.120481  658811 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:19.211741  658811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:19.277394  658811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:19.284078  658811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:19.284149  658811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:19.319995  658811 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:12:19.320028  658811 start.go:496] detecting cgroup driver to use...
	I1124 03:12:19.320064  658811 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:19.320117  658811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:19.345823  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:19.367716  658811 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:19.367782  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:19.389799  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:19.412438  658811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:19.524730  658811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:19.637210  658811 docker.go:234] disabling docker service ...
	I1124 03:12:19.637286  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:19.659861  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:19.677152  658811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:19.823448  658811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:19.960707  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:19.981616  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:20.012418  658811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:20.012486  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.058077  658811 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:20.058214  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.074742  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.118587  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.135044  658811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:20.151861  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.172656  658811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.194765  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
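
Taken together, the sed edits above leave the /etc/crio/crio.conf.d/02-crio.conf drop-in with roughly the following settings; this is an approximate reconstruction from the commands, not a dump of the actual file (section headers assumed from the stock CRI-O layout):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
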
	I1124 03:12:20.232792  658811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:20.242855  658811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:20.253417  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:20.371692  658811 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:12:21.221343  658811 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:21.221440  658811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:21.226905  658811 start.go:564] Will wait 60s for crictl version
	I1124 03:12:21.227016  658811 ssh_runner.go:195] Run: which crictl
	I1124 03:12:21.231693  658811 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:21.262514  658811 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:21.262603  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.302192  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.363037  658811 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:21.162777  657716 addons.go:530] duration metric: took 4.669427095s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 03:12:21.163688  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:21.163718  657716 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:20.668896  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:23.167980  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:21.364543  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:21.388019  658811 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:21.393290  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:21.406629  658811 kubeadm.go:884] updating cluster {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:21.406778  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:21.406846  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.445258  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.445284  658811 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:21.445336  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.471000  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.471025  658811 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:21.471037  658811 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:21.471125  658811 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-284604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
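The kubelet flags above are written as a systemd drop-in rather than edited into the main unit; the empty ExecStart= line clears the distribution default before the minikube-specific command line is set. One generic way to confirm what the kubelet will actually execute (standard systemd tooling, not minikube-specific):

	# show the unit together with all drop-ins, then the effective ExecStart
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart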
	I1124 03:12:21.471186  658811 ssh_runner.go:195] Run: crio config
	I1124 03:12:21.516457  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:21.516480  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:21.516502  658811 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:21.516532  658811 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-284604 NodeName:embed-certs-284604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:21.516680  658811 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-284604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:21.516751  658811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:21.524967  658811 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:21.525035  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:21.533487  658811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 03:12:21.547228  658811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:21.640415  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
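The scp at 03:12:21.640415 stages the three-document kubeadm config shown above (InitConfiguration, ClusterConfiguration, plus the kubelet and kube-proxy component configs) as /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a file before it is used; a hedged example, assuming kubeadm v1.26 or newer on the node:

	# validate the generated config without starting anything
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new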
	I1124 03:12:21.656434  658811 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:21.660696  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:21.674410  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:21.772584  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:21.798340  658811 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604 for IP: 192.168.94.2
	I1124 03:12:21.798360  658811 certs.go:195] generating shared ca certs ...
	I1124 03:12:21.798381  658811 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.798539  658811 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:21.798593  658811 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:21.798607  658811 certs.go:257] generating profile certs ...
	I1124 03:12:21.798690  658811 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key
	I1124 03:12:21.798708  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt with IP's: []
	I1124 03:12:21.837756  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt ...
	I1124 03:12:21.837790  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt: {Name:mk6d8aec213556beda470e3e5188eed1aec5e183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838000  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key ...
	I1124 03:12:21.838030  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key: {Name:mk56f44e1d331f82a560e15fe6a3c3ca4602bba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838172  658811 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087
	I1124 03:12:21.838189  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 03:12:21.915471  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 ...
	I1124 03:12:21.915494  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087: {Name:mk185605a13bb00cdff0decbde0063003287a88f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915630  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 ...
	I1124 03:12:21.915643  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087: {Name:mk1404f69a73d575873220c9d20779709c9db66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915715  658811 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt
	I1124 03:12:21.915784  658811 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key
	I1124 03:12:21.915837  658811 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key
	I1124 03:12:21.915852  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt with IP's: []
	I1124 03:12:22.064876  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt ...
	I1124 03:12:22.064923  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt: {Name:mk7bbfb718db4eee243d6b6658f5b6db725b34b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:22.065108  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key ...
	I1124 03:12:22.065140  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key: {Name:mk282c31a6bdbd1f185d5fa986bb6679f789f94a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
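minikube generates the profile certificates with its own Go helpers (crypto.go), signing the client, apiserver, and aggregator certs against the shared CAs cached under .minikube. For reference, an equivalent flow with plain openssl; this is a sketch only, not what minikube executes, and the file names are illustrative:

	# key + CSR for a client identity, then sign it with the cached CA
	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj "/CN=minikube-user" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
	  -CAcreateserial -days 365 -out client.crt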
	I1124 03:12:22.065488  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:22.065564  658811 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:22.065576  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:22.065602  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:22.065630  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:22.065654  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:22.065702  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:22.066383  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:22.086471  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:22.103602  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:22.120085  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:22.137488  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:12:22.154084  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:22.171055  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:22.187877  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:22.204407  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:22.222560  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:22.241380  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:22.258066  658811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:22.269950  658811 ssh_runner.go:195] Run: openssl version
	I1124 03:12:22.276120  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:22.283870  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287375  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287414  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.321400  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:22.329479  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:22.338113  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342815  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342865  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.384524  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:22.393408  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:22.402946  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.406951  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.407009  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.445501  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
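The repeated openssl/ln sequence above is OpenSSL's hashed certificate directory at work: each trusted cert gets a symlink named after its subject hash (here 51391683.0, 3ec20f2e.0, and b5213941.0) so the library can look it up in /etc/ssl/certs. The same two steps by hand, using the minikubeCA file from this run:

	# compute the subject hash and create the lookup symlink OpenSSL expects
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"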
	I1124 03:12:22.454521  658811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:22.458152  658811 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:12:22.458212  658811 kubeadm.go:401] StartCluster: {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:22.458278  658811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:22.458330  658811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:22.487574  658811 cri.go:89] found id: ""
	I1124 03:12:22.487653  658811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:22.495876  658811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:12:22.505058  658811 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:12:22.505121  658811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:12:22.515162  658811 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:12:22.515181  658811 kubeadm.go:158] found existing configuration files:
	
	I1124 03:12:22.515229  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:12:22.525864  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:12:22.525956  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:12:22.535632  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:12:22.545975  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:12:22.546068  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:12:22.556144  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.566062  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:12:22.566123  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.576364  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:12:22.587041  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:12:22.587089  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:12:22.596656  658811 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:12:22.678370  658811 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:12:22.762592  658811 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
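Both preflight warnings are benign inside the kicbase container: the kernel-config check cannot load the "configs" module there, and kubelet is started by minikube itself rather than enabled as a boot-time unit. On a persistent host, the second warning's own remediation, plus the image pre-pull that the init output suggests further down, would look like:

	# enable kubelet at boot and pre-fetch the control-plane images
	sudo systemctl enable kubelet.service
	sudo kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml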
	W1124 03:12:21.400229  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:23.400859  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:21.658606  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.664294  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:12:21.665654  657716 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:21.665685  657716 api_server.go:131] duration metric: took 507.965368ms to wait for apiserver health ...
	I1124 03:12:21.665696  657716 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:21.669523  657716 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:21.669569  657716 system_pods.go:61] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.669584  657716 system_pods.go:61] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.669600  657716 system_pods.go:61] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.669613  657716 system_pods.go:61] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.669620  657716 system_pods.go:61] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.669631  657716 system_pods.go:61] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.669640  657716 system_pods.go:61] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.669651  657716 system_pods.go:61] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.669661  657716 system_pods.go:74] duration metric: took 3.958242ms to wait for pod list to return data ...
	I1124 03:12:21.669744  657716 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:21.672641  657716 default_sa.go:45] found service account: "default"
	I1124 03:12:21.672665  657716 default_sa.go:55] duration metric: took 2.912794ms for default service account to be created ...
	I1124 03:12:21.672674  657716 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:21.676337  657716 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:21.676367  657716 system_pods.go:89] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.676379  657716 system_pods.go:89] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.676394  657716 system_pods.go:89] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.676403  657716 system_pods.go:89] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.676411  657716 system_pods.go:89] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.676422  657716 system_pods.go:89] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.676433  657716 system_pods.go:89] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.676441  657716 system_pods.go:89] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.676450  657716 system_pods.go:126] duration metric: took 3.770261ms to wait for k8s-apps to be running ...
	I1124 03:12:21.676459  657716 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:21.676504  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:21.690659  657716 system_svc.go:56] duration metric: took 14.192089ms WaitForService to wait for kubelet
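The WaitForService step above boils down to a single systemctl probe; with --quiet the result is carried entirely in the exit code. Standalone:

	sudo systemctl is-active --quiet kubelet && echo "kubelet active"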
	I1124 03:12:21.690686  657716 kubeadm.go:587] duration metric: took 5.197662584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:21.690707  657716 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:21.693136  657716 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:21.693164  657716 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:21.693184  657716 node_conditions.go:105] duration metric: took 2.469957ms to run NodePressure ...
	I1124 03:12:21.693203  657716 start.go:242] waiting for startup goroutines ...
	I1124 03:12:21.693215  657716 start.go:247] waiting for cluster config update ...
	I1124 03:12:21.693239  657716 start.go:256] writing updated cluster config ...
	I1124 03:12:21.693532  657716 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:21.697901  657716 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:21.701025  657716 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:12:23.706826  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.707596  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.168947  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:27.669069  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:25.402048  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.901054  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.707794  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.710379  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.675678  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:32.166267  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:34.784594  658811 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:12:34.784648  658811 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:12:34.784736  658811 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:12:34.784810  658811 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:12:34.784870  658811 kubeadm.go:319] OS: Linux
	I1124 03:12:34.784983  658811 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:12:34.785059  658811 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:12:34.785107  658811 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:12:34.785166  658811 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:12:34.785237  658811 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:12:34.785303  658811 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:12:34.785372  658811 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:12:34.785441  658811 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:12:34.785518  658811 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:12:34.785647  658811 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:12:34.785738  658811 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:12:34.785806  658811 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:12:34.786978  658811 out.go:252]   - Generating certificates and keys ...
	I1124 03:12:34.787057  658811 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:12:34.787166  658811 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:12:34.787260  658811 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:12:34.787314  658811 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:12:34.787380  658811 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:12:34.787463  658811 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:12:34.787510  658811 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:12:34.787654  658811 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787713  658811 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:12:34.787835  658811 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787929  658811 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:12:34.787996  658811 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:12:34.788075  658811 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:12:34.788161  658811 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:12:34.788246  658811 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:12:34.788307  658811 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:12:34.788377  658811 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:12:34.788464  658811 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:12:34.788510  658811 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:12:34.788574  658811 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:12:34.788677  658811 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:12:34.789842  658811 out.go:252]   - Booting up control plane ...
	I1124 03:12:34.789955  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:12:34.790029  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:12:34.790102  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:12:34.790202  658811 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:12:34.790286  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:12:34.790369  658811 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:12:34.790438  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:12:34.790470  658811 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:12:34.790573  658811 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:12:34.790662  658811 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:12:34.790715  658811 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001939634s
	I1124 03:12:34.790808  658811 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:12:34.790874  658811 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 03:12:34.790987  658811 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:12:34.791057  658811 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:12:34.791109  658811 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.83516238s
	I1124 03:12:34.791172  658811 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.120221493s
	I1124 03:12:34.791231  658811 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501624476s
	I1124 03:12:34.791319  658811 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:12:34.791443  658811 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:12:34.791516  658811 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:12:34.791778  658811 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-284604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:12:34.791865  658811 kubeadm.go:319] [bootstrap-token] Using token: 6opk0j.95uwfc60sd8szhpc
	I1124 03:12:34.793026  658811 out.go:252]   - Configuring RBAC rules ...
	I1124 03:12:34.793125  658811 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:12:34.793213  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:12:34.793344  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:12:34.793455  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:12:34.793557  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:12:34.793642  658811 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:12:34.793774  658811 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:12:34.793810  658811 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:12:34.793851  658811 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:12:34.793857  658811 kubeadm.go:319] 
	I1124 03:12:34.793964  658811 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:12:34.793973  658811 kubeadm.go:319] 
	I1124 03:12:34.794046  658811 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:12:34.794053  658811 kubeadm.go:319] 
	I1124 03:12:34.794074  658811 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:12:34.794151  658811 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:12:34.794229  658811 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:12:34.794239  658811 kubeadm.go:319] 
	I1124 03:12:34.794318  658811 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:12:34.794327  658811 kubeadm.go:319] 
	I1124 03:12:34.794375  658811 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:12:34.794381  658811 kubeadm.go:319] 
	I1124 03:12:34.794424  658811 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:12:34.794490  658811 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:12:34.794554  658811 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:12:34.794560  658811 kubeadm.go:319] 
	I1124 03:12:34.794633  658811 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:12:34.794705  658811 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:12:34.794712  658811 kubeadm.go:319] 
	I1124 03:12:34.794781  658811 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.794955  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:12:34.794990  658811 kubeadm.go:319] 	--control-plane 
	I1124 03:12:34.794996  658811 kubeadm.go:319] 
	I1124 03:12:34.795133  658811 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:12:34.795142  658811 kubeadm.go:319] 
	I1124 03:12:34.795208  658811 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.795304  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
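The join commands embed a --discovery-token-ca-cert-hash so joining nodes can pin the cluster CA on first contact. If the printed value is lost, it can be recomputed from the CA certificate with the standard kubeadm recipe (minikube keeps the CA under /var/lib/minikube/certs on the node; path assumed from the scp lines earlier in this log):

	# SHA-256 over the DER-encoded public key of the cluster CA
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | awk '{print $NF}'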
	I1124 03:12:34.795316  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:34.795322  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:34.796503  658811 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1124 03:12:29.901574  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.399665  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.206353  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.206828  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.667383  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:35.167626  650744 pod_ready.go:94] pod "coredns-5dd5756b68-5nwx9" is "Ready"
	I1124 03:12:35.167652  650744 pod_ready.go:86] duration metric: took 36.006547637s for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.170471  650744 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.174915  650744 pod_ready.go:94] pod "etcd-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.174952  650744 pod_ready.go:86] duration metric: took 4.460425ms for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.178276  650744 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.181797  650744 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.181815  650744 pod_ready.go:86] duration metric: took 3.521385ms for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.184086  650744 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.364640  650744 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.364666  650744 pod_ready.go:86] duration metric: took 180.561055ms for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.566321  650744 pod_ready.go:83] waiting for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.965760  650744 pod_ready.go:94] pod "kube-proxy-r82jh" is "Ready"
	I1124 03:12:35.965786  650744 pod_ready.go:86] duration metric: took 399.441601ms for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.166112  650744 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564858  650744 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-579951" is "Ready"
	I1124 03:12:36.564911  650744 pod_ready.go:86] duration metric: took 398.774389ms for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564927  650744 pod_ready.go:40] duration metric: took 37.40842222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:36.606666  650744 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 03:12:36.609650  650744 out.go:203] 
	W1124 03:12:36.610839  650744 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:12:36.611943  650744 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:12:36.613009  650744 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-579951" cluster and "default" namespace by default
	I1124 03:12:34.797545  658811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:12:34.801904  658811 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:12:34.801919  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:12:34.815659  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:12:35.008985  658811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:12:35.009118  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-284604 minikube.k8s.io/updated_at=2025_11_24T03_12_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=embed-certs-284604 minikube.k8s.io/primary=true
	I1124 03:12:35.009137  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.019423  658811 ops.go:34] apiserver oom_adj: -16
	I1124 03:12:35.098937  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.600025  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.099882  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.599914  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.099714  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.599861  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.098989  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.599248  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.099379  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.599598  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.664570  658811 kubeadm.go:1114] duration metric: took 4.655535544s to wait for elevateKubeSystemPrivileges
	I1124 03:12:39.664621  658811 kubeadm.go:403] duration metric: took 17.206413974s to StartCluster
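The half-second "kubectl get sa default" loop above is minikube polling for the default ServiceAccount, which the controller manager only creates once the control plane is serving; elevateKubeSystemPrivileges then relies on the minikube-rbac clusterrolebinding created a few steps earlier. The same wait, reduced to a shell loop (a sketch, assuming kubectl is pointed at the new cluster):

	# block until the default ServiceAccount exists
	until kubectl get serviceaccount default -n default >/dev/null 2>&1; do
	  sleep 0.5
	done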
	I1124 03:12:39.664642  658811 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.664720  658811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:39.666858  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.667137  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:12:39.667148  658811 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:39.667230  658811 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:39.667331  658811 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-284604"
	I1124 03:12:39.667356  658811 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-284604"
	I1124 03:12:39.667360  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:39.667396  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.667427  658811 addons.go:70] Setting default-storageclass=true in profile "embed-certs-284604"
	I1124 03:12:39.667451  658811 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-284604"
	I1124 03:12:39.667810  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.667990  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.668614  658811 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:39.670239  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:39.693324  658811 addons.go:239] Setting addon default-storageclass=true in "embed-certs-284604"
	I1124 03:12:39.693377  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.693617  658811 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 03:12:34.900232  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:36.901987  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:39.399311  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:39.693843  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.695301  658811 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.695324  658811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:39.695401  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.723273  658811 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.723298  658811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:39.723378  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.730678  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.746663  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.790082  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:12:39.807223  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:39.854663  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.859938  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.988561  658811 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 03:12:39.990213  658811 node_ready.go:35] waiting up to 6m0s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:12:40.170444  658811 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1124 03:12:36.707151  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:39.206261  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:41.206507  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	I1124 03:12:40.171595  658811 addons.go:530] duration metric: took 504.363947ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:12:40.492653  658811 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-284604" context rescaled to 1 replicas
	W1124 03:12:41.992667  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:43.993353  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:41.399566  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.899302  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.705614  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.706618  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.993493  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:47.993708  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:46.399440  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.399607  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
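	[editor's note] The pod_ready.go and node_ready.go retries above come from minikube polling the API server until the relevant Ready condition turns True. A minimal client-go sketch of that polling pattern follows; the function name, interval, and error handling are illustrative assumptions, not minikube's actual code:

	    package readiness

	    import (
	    	"context"
	    	"time"

	    	corev1 "k8s.io/api/core/v1"
	    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	    	"k8s.io/apimachinery/pkg/util/wait"
	    	"k8s.io/client-go/kubernetes"
	    )

	    // waitNodeReady polls until the node's Ready condition is True,
	    // mirroring the "will retry" lines above. Hypothetical helper
	    // written for this report, not minikube's API.
	    func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string, timeout time.Duration) error {
	    	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
	    		func(ctx context.Context) (bool, error) {
	    			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	    			if err != nil {
	    				return false, nil // treat API hiccups as "not ready yet"
	    			}
	    			for _, cond := range node.Status.Conditions {
	    				if cond.Type == corev1.NodeReady {
	    					return cond.Status == corev1.ConditionTrue, nil
	    				}
	    			}
	    			return false, nil
	    		})
	    }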
	
	
	==> CRI-O <==
	Nov 24 03:12:22 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:22.05833608Z" level=info msg="Started container" PID=1752 containerID=d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn/dashboard-metrics-scraper id=0f1d9264-a1dc-44af-a832-50ec6f2cad89 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e1d97e02735f8d8a4110cf0f3166803dab09205162c9400fdaa3b5f617ed4c73
	Nov 24 03:12:23 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:22.999882149Z" level=info msg="Removing container: 98ab232be61532c8216c25ac45b87b60ae9a5888ad784c700a95d30a80b1ca01" id=f92f4ab5-1ae3-46f5-9542-cf1040e4f325 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:12:23 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:23.01125384Z" level=info msg="Removed container 98ab232be61532c8216c25ac45b87b60ae9a5888ad784c700a95d30a80b1ca01: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn/dashboard-metrics-scraper" id=f92f4ab5-1ae3-46f5-9542-cf1040e4f325 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.021046515Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6c6e6fb6-37ab-4096-af30-11efd583ef2f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.0243332Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=58d306b0-c16a-4ee1-88d9-0edf7ad638bb name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.0257834Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ae7a0748-c804-4b7e-8b1d-69a4a4b55270 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.026047417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.034987628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.035235873Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/43aacfa3d0e968d79616b3b2f975a15475873bfc242f6247c78c0391e942a6be/merged/etc/passwd: no such file or directory"
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.03526472Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/43aacfa3d0e968d79616b3b2f975a15475873bfc242f6247c78c0391e942a6be/merged/etc/group: no such file or directory"
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.035551288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.075325717Z" level=info msg="Created container cb140932ac86175b67bebb44bcf5349167921dded9f33f07bf73f2c99536262b: kube-system/storage-provisioner/storage-provisioner" id=ae7a0748-c804-4b7e-8b1d-69a4a4b55270 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.076561196Z" level=info msg="Starting container: cb140932ac86175b67bebb44bcf5349167921dded9f33f07bf73f2c99536262b" id=8c7ca3d3-b515-41ad-b3a0-18ebf49f11eb name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.080857664Z" level=info msg="Started container" PID=1766 containerID=cb140932ac86175b67bebb44bcf5349167921dded9f33f07bf73f2c99536262b description=kube-system/storage-provisioner/storage-provisioner id=8c7ca3d3-b515-41ad-b3a0-18ebf49f11eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=905c81ffe1ddece5ca63e1255676b07b649e31828dcebaca14ef8f7519923f87
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.887369216Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b9817382-fd98-4752-b149-e1369e1ba283 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.888227302Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b9a62157-9148-4e57-8e35-cce48968bf87 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.889034238Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn/dashboard-metrics-scraper" id=2f44bd97-218b-4556-ba68-fd07c07c8730 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.889165943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.895910546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.896423845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.921430676Z" level=info msg="Created container ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn/dashboard-metrics-scraper" id=2f44bd97-218b-4556-ba68-fd07c07c8730 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.92180592Z" level=info msg="Starting container: ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a" id=eeaaec46-8aeb-4aa4-9cdf-e058f94aae94 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.923547401Z" level=info msg="Started container" PID=1802 containerID=ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn/dashboard-metrics-scraper id=eeaaec46-8aeb-4aa4-9cdf-e058f94aae94 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e1d97e02735f8d8a4110cf0f3166803dab09205162c9400fdaa3b5f617ed4c73
	Nov 24 03:12:45 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:45.057279061Z" level=info msg="Removing container: d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab" id=6b071593-a11f-4331-a6b0-0b3eb218da12 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:12:45 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:45.066785843Z" level=info msg="Removed container d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn/dashboard-metrics-scraper" id=6b071593-a11f-4331-a6b0-0b3eb218da12 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	ea10d1278a0b1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   2                   e1d97e02735f8       dashboard-metrics-scraper-5f989dc9cf-lbkcn       kubernetes-dashboard
	cb140932ac861       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   905c81ffe1dde       storage-provisioner                              kube-system
	c0829291d94ab       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   33 seconds ago      Running             kubernetes-dashboard        0                   110f8c07eb8f1       kubernetes-dashboard-8694d4445c-8b2mk            kubernetes-dashboard
	4c225ee065df8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           52 seconds ago      Running             coredns                     0                   c390473ebd07c       coredns-5dd5756b68-5nwx9                         kube-system
	b2fb7244da7c5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   9832406d11eff       busybox                                          default
	ed35b2d105195       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   406b11cc9cfe2       kindnet-gdpzl                                    kube-system
	cbd2e7dfcfb37       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   905c81ffe1dde       storage-provisioner                              kube-system
	bbc5f27e635d1       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           52 seconds ago      Running             kube-proxy                  0                   6e9a5c6619824       kube-proxy-r82jh                                 kube-system
	cc8b5ee4851c9       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           56 seconds ago      Running             kube-apiserver              0                   951c4b0d2527d       kube-apiserver-old-k8s-version-579951            kube-system
	3176f2d8220ea       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           56 seconds ago      Running             kube-controller-manager     0                   5a6eba6528247       kube-controller-manager-old-k8s-version-579951   kube-system
	30d22d684ad75       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           56 seconds ago      Running             etcd                        0                   0e2ac993b09bd       etcd-old-k8s-version-579951                      kube-system
	3356da3bf9c82       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           56 seconds ago      Running             kube-scheduler              0                   299621d25923a       kube-scheduler-old-k8s-version-579951            kube-system
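	[editor's note] The container table above is the kind of listing the kubelet (and crictl ps -a) obtains from CRI-O's RuntimeService. A minimal sketch of the same ListContainers call, assuming the crio.sock path from the node's cri-socket annotation below and with error handling kept deliberately crude:

	    package main

	    import (
	    	"context"
	    	"fmt"
	    	"time"

	    	"google.golang.org/grpc"
	    	"google.golang.org/grpc/credentials/insecure"
	    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	    )

	    func main() {
	    	// Dial the CRI socket named in the node annotations.
	    	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
	    		grpc.WithTransportCredentials(insecure.NewCredentials()))
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer conn.Close()

	    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	    	defer cancel()

	    	resp, err := runtimeapi.NewRuntimeServiceClient(conn).
	    		ListContainers(ctx, &runtimeapi.ListContainersRequest{})
	    	if err != nil {
	    		panic(err)
	    	}
	    	for _, c := range resp.Containers {
	    		// Roughly the CONTAINER / NAME / STATE columns above;
	    		// State is a proto enum, e.g. CONTAINER_RUNNING.
	    		fmt.Printf("%.13s  %s  %s\n", c.Id, c.GetMetadata().GetName(), c.State)
	    	}
	    }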
	
	
	==> coredns [4c225ee065df81dadf568669357be2b97899826cbcb60f9c3ac3b714637ac073] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60204 - 51660 "HINFO IN 4067648946028489573.5797369737411090544. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082919287s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
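	[editor's note] The "Still waiting on" lines come from CoreDNS's ready plugin, which serves /ready and returns 503 until every reporting plugin (here "kubernetes") has synced. A sketch of probing it, assuming the plugin's default :8181 port and that the pod is reachable (e.g. via kubectl port-forward):

	    package main

	    import (
	    	"fmt"
	    	"io"
	    	"net/http"
	    )

	    func main() {
	    	// 503 while any plugin is still syncing; 200 once ready.
	    	resp, err := http.Get("http://127.0.0.1:8181/ready")
	    	if err != nil {
	    		panic(err)
	    	}
	    	defer resp.Body.Close()
	    	body, _ := io.ReadAll(resp.Body)
	    	fmt.Println(resp.StatusCode, string(body))
	    }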
	
	
	==> describe nodes <==
	Name:               old-k8s-version-579951
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-579951
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=old-k8s-version-579951
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_10_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:10:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-579951
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:12:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:12:28 +0000   Mon, 24 Nov 2025 03:10:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:12:28 +0000   Mon, 24 Nov 2025 03:10:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:12:28 +0000   Mon, 24 Nov 2025 03:10:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:12:28 +0000   Mon, 24 Nov 2025 03:11:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-579951
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                5d61a30e-9821-4be7-b90f-0f413e931a19
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-5nwx9                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-579951                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-gdpzl                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-579951             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-579951    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-r82jh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-579951             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-lbkcn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-8b2mk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m                 kubelet          Node old-k8s-version-579951 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                 kubelet          Node old-k8s-version-579951 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m                 kubelet          Node old-k8s-version-579951 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node old-k8s-version-579951 event: Registered Node old-k8s-version-579951 in Controller
	  Normal  NodeReady                94s                kubelet          Node old-k8s-version-579951 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node old-k8s-version-579951 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node old-k8s-version-579951 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node old-k8s-version-579951 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           40s                node-controller  Node old-k8s-version-579951 event: Registered Node old-k8s-version-579951 in Controller
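	[editor's note] A quick check of the "Allocated resources" percentages above: 850m of CPU requested against the node's 8-CPU (8000m) allocatable, with kubectl describe rounding the fraction down, gives the "10%" shown.

	    package main

	    import "fmt"

	    func main() {
	    	// 850m requested on 8000m allocatable; integer math rounds down.
	    	requests, allocatable := int64(850), int64(8000) // millicores
	    	fmt.Printf("%d%%\n", requests*100/allocatable)   // prints: 10%
	    }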
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [30d22d684ad7501e38080ff45bbe87f71a21252754ba692fc20125e3845f807a] <==
	{"level":"info","ts":"2025-11-24T03:12:11.242497Z","caller":"traceutil/trace.go:171","msg":"trace[1928993192] transaction","detail":"{read_only:false; response_revision:534; number_of_response:1; }","duration":"119.235453ms","start":"2025-11-24T03:12:11.123251Z","end":"2025-11-24T03:12:11.242486Z","steps":["trace[1928993192] 'process raft request'  (duration: 118.851095ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.4454Z","caller":"traceutil/trace.go:171","msg":"trace[2059375659] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"191.875542ms","start":"2025-11-24T03:12:11.253494Z","end":"2025-11-24T03:12:11.44537Z","steps":["trace[2059375659] 'process raft request'  (duration: 129.376271ms)","trace[2059375659] 'compare'  (duration: 62.274794ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:11.445424Z","caller":"traceutil/trace.go:171","msg":"trace[1401579851] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"137.820678ms","start":"2025-11-24T03:12:11.307589Z","end":"2025-11-24T03:12:11.445409Z","steps":["trace[1401579851] 'process raft request'  (duration: 137.774909ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.445464Z","caller":"traceutil/trace.go:171","msg":"trace[1394475637] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"188.804117ms","start":"2025-11-24T03:12:11.256653Z","end":"2025-11-24T03:12:11.445457Z","steps":["trace[1394475637] 'process raft request'  (duration: 188.651582ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.44549Z","caller":"traceutil/trace.go:171","msg":"trace[1132775598] transaction","detail":"{read_only:false; response_revision:540; number_of_response:1; }","duration":"191.88256ms","start":"2025-11-24T03:12:11.253602Z","end":"2025-11-24T03:12:11.445484Z","steps":["trace[1132775598] 'process raft request'  (duration: 191.67009ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.445483Z","caller":"traceutil/trace.go:171","msg":"trace[1462021919] linearizableReadLoop","detail":"{readStateIndex:568; appliedIndex:566; }","duration":"189.426898ms","start":"2025-11-24T03:12:11.256037Z","end":"2025-11-24T03:12:11.445464Z","steps":["trace[1462021919] 'read index received'  (duration: 46.454859ms)","trace[1462021919] 'applied index is now lower than readState.Index'  (duration: 142.970544ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T03:12:11.445565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.517167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" ","response":"range_response_count:1 size:4430"}
	{"level":"info","ts":"2025-11-24T03:12:11.446067Z","caller":"traceutil/trace.go:171","msg":"trace[1728946821] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper; range_end:; response_count:1; response_revision:542; }","duration":"190.024276ms","start":"2025-11-24T03:12:11.256025Z","end":"2025-11-24T03:12:11.446049Z","steps":["trace[1728946821] 'agreement among raft nodes before linearized reading'  (duration: 189.486106ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.635558Z","caller":"traceutil/trace.go:171","msg":"trace[1035930019] transaction","detail":"{read_only:false; response_revision:551; number_of_response:1; }","duration":"174.433545ms","start":"2025-11-24T03:12:11.461088Z","end":"2025-11-24T03:12:11.635522Z","steps":["trace[1035930019] 'process raft request'  (duration: 87.83715ms)","trace[1035930019] 'compare'  (duration: 86.397039ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:11.635588Z","caller":"traceutil/trace.go:171","msg":"trace[1494662236] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"170.608451ms","start":"2025-11-24T03:12:11.464962Z","end":"2025-11-24T03:12:11.635571Z","steps":["trace[1494662236] 'process raft request'  (duration: 170.500066ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.635812Z","caller":"traceutil/trace.go:171","msg":"trace[1441150717] transaction","detail":"{read_only:false; response_revision:554; number_of_response:1; }","duration":"169.981505ms","start":"2025-11-24T03:12:11.465818Z","end":"2025-11-24T03:12:11.635799Z","steps":["trace[1441150717] 'process raft request'  (duration: 169.888533ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.635831Z","caller":"traceutil/trace.go:171","msg":"trace[1707450590] transaction","detail":"{read_only:false; response_revision:553; number_of_response:1; }","duration":"170.664537ms","start":"2025-11-24T03:12:11.465143Z","end":"2025-11-24T03:12:11.635807Z","steps":["trace[1707450590] 'process raft request'  (duration: 170.385783ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.805757Z","caller":"traceutil/trace.go:171","msg":"trace[437685081] linearizableReadLoop","detail":"{readStateIndex:584; appliedIndex:582; }","duration":"163.009648ms","start":"2025-11-24T03:12:11.64273Z","end":"2025-11-24T03:12:11.805739Z","steps":["trace[437685081] 'read index received'  (duration: 34.143399ms)","trace[437685081] 'applied index is now lower than readState.Index'  (duration: 128.865469ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:11.805781Z","caller":"traceutil/trace.go:171","msg":"trace[220475867] transaction","detail":"{read_only:false; response_revision:556; number_of_response:1; }","duration":"164.03594ms","start":"2025-11-24T03:12:11.641722Z","end":"2025-11-24T03:12:11.805758Z","steps":["trace[220475867] 'process raft request'  (duration: 117.312132ms)","trace[220475867] 'compare'  (duration: 46.544376ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:11.805831Z","caller":"traceutil/trace.go:171","msg":"trace[1577574396] transaction","detail":"{read_only:false; response_revision:557; number_of_response:1; }","duration":"162.314634ms","start":"2025-11-24T03:12:11.643501Z","end":"2025-11-24T03:12:11.805816Z","steps":["trace[1577574396] 'process raft request'  (duration: 162.190784ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:12:11.805918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.158512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-8694d4445c\" ","response":"range_response_count:1 size:3138"}
	{"level":"info","ts":"2025-11-24T03:12:11.805957Z","caller":"traceutil/trace.go:171","msg":"trace[1986015274] range","detail":"{range_begin:/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-8694d4445c; range_end:; response_count:1; response_revision:557; }","duration":"163.231743ms","start":"2025-11-24T03:12:11.642713Z","end":"2025-11-24T03:12:11.805944Z","steps":["trace[1986015274] 'agreement among raft nodes before linearized reading'  (duration: 163.128265ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:12:11.805974Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.681958ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-5nwx9\" ","response":"range_response_count:1 size:4992"}
	{"level":"info","ts":"2025-11-24T03:12:11.806031Z","caller":"traceutil/trace.go:171","msg":"trace[274014986] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-5nwx9; range_end:; response_count:1; response_revision:557; }","duration":"142.745614ms","start":"2025-11-24T03:12:11.663275Z","end":"2025-11-24T03:12:11.80602Z","steps":["trace[274014986] 'agreement among raft nodes before linearized reading'  (duration: 142.656705ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:12.647702Z","caller":"traceutil/trace.go:171","msg":"trace[372732901] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"297.099858ms","start":"2025-11-24T03:12:12.350588Z","end":"2025-11-24T03:12:12.647688Z","steps":["trace[372732901] 'process raft request'  (duration: 297.006221ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:28.792303Z","caller":"traceutil/trace.go:171","msg":"trace[1974596828] linearizableReadLoop","detail":"{readStateIndex:623; appliedIndex:622; }","duration":"128.95868ms","start":"2025-11-24T03:12:28.663325Z","end":"2025-11-24T03:12:28.792284Z","steps":["trace[1974596828] 'read index received'  (duration: 128.868203ms)","trace[1974596828] 'applied index is now lower than readState.Index'  (duration: 89.618µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:28.792419Z","caller":"traceutil/trace.go:171","msg":"trace[1173164363] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"134.013278ms","start":"2025-11-24T03:12:28.658379Z","end":"2025-11-24T03:12:28.792392Z","steps":["trace[1173164363] 'process raft request'  (duration: 133.746689ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:12:28.792498Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.168972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-5nwx9\" ","response":"range_response_count:1 size:4992"}
	{"level":"info","ts":"2025-11-24T03:12:28.792535Z","caller":"traceutil/trace.go:171","msg":"trace[1597943492] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-5nwx9; range_end:; response_count:1; response_revision:590; }","duration":"129.231863ms","start":"2025-11-24T03:12:28.66329Z","end":"2025-11-24T03:12:28.792522Z","steps":["trace[1597943492] 'agreement among raft nodes before linearized reading'  (duration: 129.073023ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:28.819172Z","caller":"traceutil/trace.go:171","msg":"trace[328415600] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"145.87701ms","start":"2025-11-24T03:12:28.673282Z","end":"2025-11-24T03:12:28.819159Z","steps":["trace[328415600] 'process raft request'  (duration: 145.758687ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:12:51 up  1:55,  0 user,  load average: 5.46, 4.30, 2.74
	Linux old-k8s-version-579951 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ed35b2d1051955151fc61b8cc47f4e8d1bd605dd3fe30e9571e13bb8c6a72a2d] <==
	I1124 03:11:59.400660       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:11:59.400924       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 03:11:59.401114       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:11:59.401135       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:11:59.401163       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:11:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:11:59.698364       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:11:59.698437       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:11:59.698453       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:11:59.698630       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:11:59.898520       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:11:59.898575       1 metrics.go:72] Registering metrics
	I1124 03:11:59.898661       1 controller.go:711] "Syncing nftables rules"
	I1124 03:12:09.699106       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:12:09.699156       1 main.go:301] handling current node
	I1124 03:12:19.699021       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:12:19.699060       1 main.go:301] handling current node
	I1124 03:12:29.699065       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:12:29.699104       1 main.go:301] handling current node
	I1124 03:12:39.700248       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:12:39.700293       1 main.go:301] handling current node
	I1124 03:12:49.702554       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:12:49.702591       1 main.go:301] handling current node
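	[editor's note] The repeated "Handling node with IPs" lines above arrive on a steady 10-second cadence: a plain ticker-driven reconcile loop. A hypothetical sketch of that shape (kindnet's real loop also watches informers, which this omits):

	    package kindnetsketch

	    import (
	    	"context"
	    	"time"
	    )

	    // reconcileEvery runs sync on a fixed cadence until ctx is cancelled.
	    func reconcileEvery(ctx context.Context, every time.Duration, sync func() error) {
	    	t := time.NewTicker(every)
	    	defer t.Stop()
	    	for {
	    		select {
	    		case <-ctx.Done():
	    			return
	    		case <-t.C:
	    			_ = sync() // kindnet re-derives per-node routes/nftables here
	    		}
	    	}
	    }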
	
	
	==> kube-apiserver [cc8b5ee4851c9ae1241dd77995f3d1a2e725abb08136f47c106f5adf7f25f2a7] <==
	I1124 03:11:57.986735       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:11:58.003966       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 03:11:58.003999       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 03:11:58.004342       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 03:11:58.004385       1 shared_informer.go:318] Caches are synced for configmaps
	I1124 03:11:58.004840       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 03:11:58.006094       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 03:11:58.006418       1 aggregator.go:166] initial CRD sync complete...
	I1124 03:11:58.006442       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 03:11:58.006449       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:11:58.006458       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:11:58.007501       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1124 03:11:58.012545       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 03:11:58.908827       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:11:58.997530       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 03:11:59.028113       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 03:11:59.048840       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:11:59.055267       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:11:59.063029       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 03:11:59.101839       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.51.91"}
	I1124 03:11:59.113207       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.35.195"}
	I1124 03:12:10.894107       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:12:10.930596       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 03:12:11.029282       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3176f2d8220eaa411e72fa77d582041c78e4d0b8acbd739cd01992ec3cfa7230] <==
	I1124 03:12:11.123058       1 shared_informer.go:318] Caches are synced for stateful set
	I1124 03:12:11.243804       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1124 03:12:11.303604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="257.087262ms"
	I1124 03:12:11.303787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.216µs"
	I1124 03:12:11.443085       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:12:11.446611       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:12:11.446642       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 03:12:11.447806       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-lbkcn"
	I1124 03:12:11.448445       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-8b2mk"
	I1124 03:12:11.460875       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="339.653085ms"
	I1124 03:12:11.461314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="218.762514ms"
	I1124 03:12:11.637441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="176.484399ms"
	I1124 03:12:11.637550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="176.194963ms"
	I1124 03:12:11.637965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="106.179µs"
	I1124 03:12:11.808204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="106.396µs"
	I1124 03:12:11.812463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="174.960918ms"
	I1124 03:12:11.812563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="58.675µs"
	I1124 03:12:18.034326       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.967331ms"
	I1124 03:12:18.034511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="84.81µs"
	I1124 03:12:22.007826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.119µs"
	I1124 03:12:23.010750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.582µs"
	I1124 03:12:24.014463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.01µs"
	I1124 03:12:35.094478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.729619ms"
	I1124 03:12:35.094607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.416µs"
	I1124 03:12:45.067804       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.225µs"
	
	
	==> kube-proxy [bbc5f27e635d1171390cb9cc082c8e71358be7dd9d3966888be81466bec32466] <==
	I1124 03:11:59.290816       1 server_others.go:69] "Using iptables proxy"
	I1124 03:11:59.299166       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1124 03:11:59.316614       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:11:59.318808       1 server_others.go:152] "Using iptables Proxier"
	I1124 03:11:59.318830       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 03:11:59.318836       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 03:11:59.318866       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 03:11:59.319135       1 server.go:846] "Version info" version="v1.28.0"
	I1124 03:11:59.319157       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:11:59.320170       1 config.go:188] "Starting service config controller"
	I1124 03:11:59.320208       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 03:11:59.320246       1 config.go:97] "Starting endpoint slice config controller"
	I1124 03:11:59.320251       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 03:11:59.320486       1 config.go:315] "Starting node config controller"
	I1124 03:11:59.320510       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 03:11:59.421209       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1124 03:11:59.421234       1 shared_informer.go:318] Caches are synced for service config
	I1124 03:11:59.421308       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3356da3bf9c8232ed305911fa37644fd0513640f4477238b1a7e39b8e438c2a0] <==
	I1124 03:11:56.081101       1 serving.go:348] Generated self-signed cert in-memory
	W1124 03:11:57.959345       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:11:57.959377       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 03:11:57.959389       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:11:57.959399       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:11:57.987598       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1124 03:11:57.990944       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:11:57.995275       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1124 03:11:57.995390       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1124 03:11:57.995566       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:11:57.996109       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 03:11:58.096298       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 03:12:11 old-k8s-version-579951 kubelet[737]: I1124 03:12:11.457513     737 topology_manager.go:215] "Topology Admit Handler" podUID="36c6705a-eceb-43a7-9fce-96446385e0e3" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-8b2mk"
	Nov 24 03:12:11 old-k8s-version-579951 kubelet[737]: I1124 03:12:11.458492     737 topology_manager.go:215] "Topology Admit Handler" podUID="5096b231-1ea7-4e83-9132-f8255b42e564" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-lbkcn"
	Nov 24 03:12:11 old-k8s-version-579951 kubelet[737]: I1124 03:12:11.635061     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxkr\" (UniqueName: \"kubernetes.io/projected/36c6705a-eceb-43a7-9fce-96446385e0e3-kube-api-access-nfxkr\") pod \"kubernetes-dashboard-8694d4445c-8b2mk\" (UID: \"36c6705a-eceb-43a7-9fce-96446385e0e3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8b2mk"
	Nov 24 03:12:11 old-k8s-version-579951 kubelet[737]: I1124 03:12:11.635146     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/36c6705a-eceb-43a7-9fce-96446385e0e3-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-8b2mk\" (UID: \"36c6705a-eceb-43a7-9fce-96446385e0e3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8b2mk"
	Nov 24 03:12:11 old-k8s-version-579951 kubelet[737]: I1124 03:12:11.635192     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5096b231-1ea7-4e83-9132-f8255b42e564-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-lbkcn\" (UID: \"5096b231-1ea7-4e83-9132-f8255b42e564\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn"
	Nov 24 03:12:11 old-k8s-version-579951 kubelet[737]: I1124 03:12:11.635259     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcf8n\" (UniqueName: \"kubernetes.io/projected/5096b231-1ea7-4e83-9132-f8255b42e564-kube-api-access-gcf8n\") pod \"dashboard-metrics-scraper-5f989dc9cf-lbkcn\" (UID: \"5096b231-1ea7-4e83-9132-f8255b42e564\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn"
	Nov 24 03:12:21 old-k8s-version-579951 kubelet[737]: I1124 03:12:21.991527     737 scope.go:117] "RemoveContainer" containerID="98ab232be61532c8216c25ac45b87b60ae9a5888ad784c700a95d30a80b1ca01"
	Nov 24 03:12:22 old-k8s-version-579951 kubelet[737]: I1124 03:12:22.008860     737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8b2mk" podStartSLOduration=5.852248983 podCreationTimestamp="2025-11-24 03:12:11 +0000 UTC" firstStartedPulling="2025-11-24 03:12:12.347825025 +0000 UTC m=+17.556511071" lastFinishedPulling="2025-11-24 03:12:17.503019624 +0000 UTC m=+22.711705680" observedRunningTime="2025-11-24 03:12:18.018594918 +0000 UTC m=+23.227280976" watchObservedRunningTime="2025-11-24 03:12:22.007443592 +0000 UTC m=+27.216129648"
	Nov 24 03:12:22 old-k8s-version-579951 kubelet[737]: I1124 03:12:22.996930     737 scope.go:117] "RemoveContainer" containerID="98ab232be61532c8216c25ac45b87b60ae9a5888ad784c700a95d30a80b1ca01"
	Nov 24 03:12:22 old-k8s-version-579951 kubelet[737]: I1124 03:12:22.997253     737 scope.go:117] "RemoveContainer" containerID="d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab"
	Nov 24 03:12:22 old-k8s-version-579951 kubelet[737]: E1124 03:12:22.997639     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbkcn_kubernetes-dashboard(5096b231-1ea7-4e83-9132-f8255b42e564)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn" podUID="5096b231-1ea7-4e83-9132-f8255b42e564"
	Nov 24 03:12:24 old-k8s-version-579951 kubelet[737]: I1124 03:12:24.001362     737 scope.go:117] "RemoveContainer" containerID="d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab"
	Nov 24 03:12:24 old-k8s-version-579951 kubelet[737]: E1124 03:12:24.001773     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbkcn_kubernetes-dashboard(5096b231-1ea7-4e83-9132-f8255b42e564)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn" podUID="5096b231-1ea7-4e83-9132-f8255b42e564"
	Nov 24 03:12:30 old-k8s-version-579951 kubelet[737]: I1124 03:12:30.019981     737 scope.go:117] "RemoveContainer" containerID="cbd2e7dfcfb37a19af31d60fb1906fc2f2ff1f04f8b5e0b378efbf444e50673f"
	Nov 24 03:12:32 old-k8s-version-579951 kubelet[737]: I1124 03:12:32.061272     737 scope.go:117] "RemoveContainer" containerID="d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab"
	Nov 24 03:12:32 old-k8s-version-579951 kubelet[737]: E1124 03:12:32.061670     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbkcn_kubernetes-dashboard(5096b231-1ea7-4e83-9132-f8255b42e564)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn" podUID="5096b231-1ea7-4e83-9132-f8255b42e564"
	Nov 24 03:12:44 old-k8s-version-579951 kubelet[737]: I1124 03:12:44.886794     737 scope.go:117] "RemoveContainer" containerID="d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab"
	Nov 24 03:12:44 old-k8s-version-579951 kubelet[737]: E1124 03:12:44.946679     737 cadvisor_stats_provider.go:444] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/crio-ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a.scope/container\": RecentStats: unable to find data in memory cache], [\"/system.slice/crio-ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a.scope\": RecentStats: unable to find data in memory cache]"
	Nov 24 03:12:45 old-k8s-version-579951 kubelet[737]: I1124 03:12:45.056132     737 scope.go:117] "RemoveContainer" containerID="d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab"
	Nov 24 03:12:45 old-k8s-version-579951 kubelet[737]: I1124 03:12:45.056366     737 scope.go:117] "RemoveContainer" containerID="ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a"
	Nov 24 03:12:45 old-k8s-version-579951 kubelet[737]: E1124 03:12:45.056738     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbkcn_kubernetes-dashboard(5096b231-1ea7-4e83-9132-f8255b42e564)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn" podUID="5096b231-1ea7-4e83-9132-f8255b42e564"
	Nov 24 03:12:48 old-k8s-version-579951 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 03:12:48 old-k8s-version-579951 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 03:12:48 old-k8s-version-579951 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 03:12:48 old-k8s-version-579951 systemd[1]: kubelet.service: Consumed 1.454s CPU time.
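	[editor's note] The kubelet errors above show the dashboard-metrics-scraper restart delay growing from "back-off 10s" to "back-off 20s". That is CrashLoopBackOff: the kubelet doubles the delay after each failed restart, capped at five minutes. A sketch of the resulting series, using kubelet's documented defaults rather than anything read from this node:

	    package main

	    import (
	    	"fmt"
	    	"time"
	    )

	    func main() {
	    	// Start at 10s, double per failed restart, cap at 5m.
	    	d, limit := 10*time.Second, 5*time.Minute
	    	for i := 0; i < 7; i++ {
	    		fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
	    		d *= 2
	    		if d > limit {
	    			d = limit
	    		}
	    	}
	    }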
	
	
	==> kubernetes-dashboard [c0829291d94ab54222f0c979e770045678982177db6d180fb2f94c79be1258de] <==
	2025/11/24 03:12:17 Using namespace: kubernetes-dashboard
	2025/11/24 03:12:17 Using in-cluster config to connect to apiserver
	2025/11/24 03:12:17 Using secret token for csrf signing
	2025/11/24 03:12:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 03:12:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 03:12:17 Successful initial request to the apiserver, version: v1.28.0
	2025/11/24 03:12:17 Generating JWE encryption key
	2025/11/24 03:12:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 03:12:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 03:12:17 Initializing JWE encryption key from synchronized object
	2025/11/24 03:12:17 Creating in-cluster Sidecar client
	2025/11/24 03:12:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 03:12:17 Serving insecurely on HTTP port: 9090
	2025/11/24 03:12:17 Starting overwatch
	2025/11/24 03:12:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
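	The dashboard log above shows CSRF state being generated into the kubernetes-dashboard-csrf secret and a JWE key synchronized through kubernetes-dashboard-key-holder. Both can be checked directly if that flow is in doubt (sketch, same kube context as above):
	
	    kubectl --context old-k8s-version-579951 -n kubernetes-dashboard \
	      get secret kubernetes-dashboard-csrf kubernetes-dashboard-key-holder
	    # Non-empty data fields confirm the "Generating and storing" step succeeded.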
	
	==> storage-provisioner [cb140932ac86175b67bebb44bcf5349167921dded9f33f07bf73f2c99536262b] <==
	I1124 03:12:30.107261       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:12:30.123681       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:12:30.123797       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 03:12:47.516892       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:12:47.517027       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-579951_30a07f27-8ce9-4d33-aff4-87779858de0d!
	I1124 03:12:47.517013       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59a77692-accc-462a-ac9b-8cd00bada505", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-579951_30a07f27-8ce9-4d33-aff4-87779858de0d became leader
	I1124 03:12:47.617657       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-579951_30a07f27-8ce9-4d33-aff4-87779858de0d!
	
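	This provisioner instance acquired the lease only after the pause window ended; the leader record lives in an annotation on the kube-system/k8s.io-minikube-hostpath Endpoints object referenced by the Event above. The current holder can be read back from that annotation (sketch; this is the standard client-go endpoints-lock annotation key):
	
	    kubectl --context old-k8s-version-579951 -n kube-system get endpoints k8s.io-minikube-hostpath \
	      -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
	    # Prints the LeaderElectionRecord JSON; holderIdentity should match the
	    # old-k8s-version-579951_30a07f27-... identity logged above.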
	
	==> storage-provisioner [cbd2e7dfcfb37a19af31d60fb1906fc2f2ff1f04f8b5e0b378efbf444e50673f] <==
	I1124 03:11:59.261535       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 03:12:29.263216       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
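	The earlier instance died because the in-cluster apiserver VIP (10.96.0.1:443) stayed unreachable for 30s, consistent with the pause/unpause window this test exercises. The VIP can be probed from the node to separate a paused apiserver from a broken service route (sketch; assumes curl and timeout are present in the kicbase node image, as they normally are):
	
	    out/minikube-linux-amd64 -p old-k8s-version-579951 ssh -- \
	      timeout 5 curl -sk https://10.96.0.1:443/version
	    # A hang followed by exit 124 reproduces the dial timeout above;
	    # a JSON version payload means the VIP is routable again.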

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-579951 -n old-k8s-version-579951
E1124 03:12:51.942683  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-579951 -n old-k8s-version-579951: exit status 2 (335.904391ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-579951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-579951
helpers_test.go:243: (dbg) docker inspect old-k8s-version-579951:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7",
	        "Created": "2025-11-24T03:10:32.99838887Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 650944,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:11:48.56517649Z",
	            "FinishedAt": "2025-11-24T03:11:47.732440396Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7/hostname",
	        "HostsPath": "/var/lib/docker/containers/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7/hosts",
	        "LogPath": "/var/lib/docker/containers/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7/3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7-json.log",
	        "Name": "/old-k8s-version-579951",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-579951:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-579951",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3f9d9080b81a219c436ef1617d3fa7e38c8b5c7209291df5bd637c018f51c7a7",
	                "LowerDir": "/var/lib/docker/overlay2/7475b95118b29264700554efe3a336409ac4e9cbb1a3e2671ad217c583d5e887-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7475b95118b29264700554efe3a336409ac4e9cbb1a3e2671ad217c583d5e887/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7475b95118b29264700554efe3a336409ac4e9cbb1a3e2671ad217c583d5e887/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7475b95118b29264700554efe3a336409ac4e9cbb1a3e2671ad217c583d5e887/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-579951",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-579951/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-579951",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-579951",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-579951",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "011105d85338a845940181cca52916d7f23b363fc02f1d2de87d5e91349bd4a9",
	            "SandboxKey": "/var/run/docker/netns/011105d85338",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33478"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33479"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33482"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33480"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33481"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-579951": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ca041b7f18e6d1ec0481cbe24b969048a40ddf73308219ebc68c053037d8a9f",
	                    "EndpointID": "4a24fac5c1f388d38c08fbcc81dc86ef83be1d89ce8a662edd66ef26dffb3bcc",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "86:aa:bc:3c:36:eb",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-579951",
	                        "3f9d9080b81a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
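The inspect output above is how the harness learns where the node is reachable: each guest port (22, 2376, 8443, ...) is published on 127.0.0.1 with an ephemeral host port. The SSH port in particular (33478 here) can be extracted with the same Go-template expression minikube itself runs later in this log (sketch):

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      old-k8s-version-579951
    # => 33478; the harness then dials ssh as docker@127.0.0.1:33478 with the profile's id_rsa key.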
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-579951 -n old-k8s-version-579951
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-579951 -n old-k8s-version-579951: exit status 2 (338.73027ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
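Both status probes render a single field of the status struct through a Go template, so stdout can read "Running" while the exit code still signals a fault; minikube encodes component state in the exit status, which is why the harness treats 2 as "may be ok" for a deliberately paused cluster. Several fields can be read in one call (sketch; Kubelet is a standard status field alongside Host and APIServer):

    out/minikube-linux-amd64 status -p old-k8s-version-579951 \
      --format '{{.Host}}/{{.Kubelet}}/{{.APIServer}}'
    # e.g. prints something like Running/Stopped/Paused for a paused cluster, with a non-zero exit code.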
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-579951 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-579951 logs -n 25: (1.09151934s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:10 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p newest-cni-438041 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-579951 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p newest-cni-438041 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ stop    │ -p old-k8s-version-579951 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-603010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993813 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ stop    │ -p no-preload-603010 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-579951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-438041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ image   │ newest-cni-438041 image list --format=json                                                                                                                                                                                                    │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p newest-cni-438041 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-603010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p disable-driver-mounts-242597                                                                                                                                                                                                               │ disable-driver-mounts-242597 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ image   │ old-k8s-version-579951 image list --format=json                                                                                                                                                                                               │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p old-k8s-version-579951 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:12:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:12:09.055015  658811 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:12:09.055230  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055247  658811 out.go:374] Setting ErrFile to fd 2...
	I1124 03:12:09.055253  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055468  658811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:12:09.055909  658811 out.go:368] Setting JSON to false
	I1124 03:12:09.056956  658811 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6876,"bootTime":1763947053,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:12:09.057009  658811 start.go:143] virtualization: kvm guest
	I1124 03:12:09.058671  658811 out.go:179] * [embed-certs-284604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:12:09.059850  658811 notify.go:221] Checking for updates...
	I1124 03:12:09.059855  658811 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:12:09.061128  658811 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:12:09.062317  658811 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:09.063358  658811 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:12:09.064255  658811 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:12:09.065078  658811 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:12:09.066407  658811 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066509  658811 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066589  658811 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:12:09.066666  658811 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:12:09.089713  658811 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:12:09.089855  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.145948  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.135562124 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.146071  658811 docker.go:319] overlay module found
	I1124 03:12:09.147708  658811 out.go:179] * Using the docker driver based on user configuration
	I1124 03:12:09.148714  658811 start.go:309] selected driver: docker
	I1124 03:12:09.148737  658811 start.go:927] validating driver "docker" against <nil>
	I1124 03:12:09.148747  658811 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:12:09.149338  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.210343  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.200351707 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.210534  658811 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:12:09.210794  658811 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:09.212381  658811 out.go:179] * Using Docker driver with root privileges
	I1124 03:12:09.213398  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:09.213482  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:09.213497  658811 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:12:09.213574  658811 start.go:353] cluster config:
	{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:09.214730  658811 out.go:179] * Starting "embed-certs-284604" primary control-plane node in "embed-certs-284604" cluster
	I1124 03:12:09.215613  658811 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:12:09.216663  658811 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:12:09.217654  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.217694  658811 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:12:09.217703  658811 cache.go:65] Caching tarball of preloaded images
	I1124 03:12:09.217732  658811 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:12:09.217791  658811 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:12:09.217808  658811 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:12:09.217977  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:09.218021  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json: {Name:mkd4898576ebe0ebf6d2ca35fddd33eac8f127df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:09.239944  658811 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:12:09.239962  658811 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:12:09.239976  658811 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:12:09.240004  658811 start.go:360] acquireMachinesLock for embed-certs-284604: {Name:mkd39be5908e1d289ed5af40b6c2b1c510beffd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:12:09.240088  658811 start.go:364] duration metric: took 68.665µs to acquireMachinesLock for "embed-certs-284604"
	I1124 03:12:09.240109  658811 start.go:93] Provisioning new machine with config: &{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:09.240182  658811 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:12:05.014758  656542 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-993813" ...
	I1124 03:12:05.014805  656542 cli_runner.go:164] Run: docker start default-k8s-diff-port-993813
	I1124 03:12:05.297424  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:05.316835  656542 kic.go:430] container "default-k8s-diff-port-993813" state is running.
	I1124 03:12:05.317309  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:05.336690  656542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/config.json ...
	I1124 03:12:05.336923  656542 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:05.336992  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:05.356564  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:05.356863  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:05.356907  656542 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:05.357642  656542 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39256->127.0.0.1:33488: read: connection reset by peer
	I1124 03:12:08.497704  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.497744  656542 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993813"
	I1124 03:12:08.497799  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.516284  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.516620  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.516642  656542 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993813 && echo "default-k8s-diff-port-993813" | sudo tee /etc/hostname
	I1124 03:12:08.664299  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.664399  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.683215  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.683424  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.683440  656542 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993813/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:08.824495  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:08.824534  656542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:08.824571  656542 ubuntu.go:190] setting up certificates
	I1124 03:12:08.824597  656542 provision.go:84] configureAuth start
	I1124 03:12:08.824659  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:08.842592  656542 provision.go:143] copyHostCerts
	I1124 03:12:08.842639  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:08.842651  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:08.842701  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:08.842805  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:08.842813  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:08.842838  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:08.842940  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:08.842950  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:08.842981  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:08.843051  656542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993813 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-993813 localhost minikube]
	I1124 03:12:08.993088  656542 provision.go:177] copyRemoteCerts
	I1124 03:12:08.993141  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:08.993180  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.010481  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.112610  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:09.134182  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 03:12:09.153393  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:12:09.173516  656542 provision.go:87] duration metric: took 348.902104ms to configureAuth
	I1124 03:12:09.173547  656542 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:09.173717  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.173820  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.195519  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:09.195738  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:09.195756  656542 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.551404  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
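	The step above drops a CRIO_MINIKUBE_OPTIONS drop-in under /etc/sysconfig so crio restarts with the service CIDR (10.96.0.0/12) treated as an insecure registry. The result can be verified on the node (sketch; assumes the standard kicbase layout shown in this log):
	
	    out/minikube-linux-amd64 -p default-k8s-diff-port-993813 ssh -- \
	      'cat /etc/sysconfig/crio.minikube && systemctl is-active crio'
	    # Should echo the option line written above, then "active".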
	I1124 03:12:09.551434  656542 machine.go:97] duration metric: took 4.214494542s to provisionDockerMachine
	I1124 03:12:09.551449  656542 start.go:293] postStartSetup for "default-k8s-diff-port-993813" (driver="docker")
	I1124 03:12:09.551463  656542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:09.551533  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:09.551574  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.572440  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.684044  656542 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:09.688328  656542 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:09.688354  656542 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:09.688365  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:09.688414  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:09.688488  656542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:09.688660  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:09.696023  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:09.725715  656542 start.go:296] duration metric: took 174.248037ms for postStartSetup
	I1124 03:12:09.725795  656542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:09.725851  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.747235  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:06.610202  657716 out.go:252] * Restarting existing docker container for "no-preload-603010" ...
	I1124 03:12:06.610267  657716 cli_runner.go:164] Run: docker start no-preload-603010
	I1124 03:12:06.895418  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:06.913279  657716 kic.go:430] container "no-preload-603010" state is running.
	I1124 03:12:06.913694  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:06.931543  657716 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/config.json ...
	I1124 03:12:06.931779  657716 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:06.931840  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:06.949180  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:06.949422  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:06.949436  657716 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:06.950106  657716 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53738->127.0.0.1:33493: read: connection reset by peer
	I1124 03:12:10.094410  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.094455  657716 ubuntu.go:182] provisioning hostname "no-preload-603010"
	I1124 03:12:10.094548  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.117277  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.117614  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.117637  657716 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-603010 && echo "no-preload-603010" | sudo tee /etc/hostname
	I1124 03:12:10.272082  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.272162  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.293197  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.293525  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.293557  657716 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603010' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603010/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603010' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:10.440289  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
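The script above makes the machine's new hostname resolve locally in an idempotent way: it rewrites an existing 127.0.1.1 entry if one is present and appends one otherwise. A standalone sketch of the same pattern, with HOSTNAME as a placeholder for the profile name used in the log:

	HOSTNAME=no-preload-603010   # placeholder; any machine name works
	if ! grep -q "\s${HOSTNAME}$" /etc/hosts; then
	  if grep -q '^127.0.1.1\s' /etc/hosts; then
	    # rewrite the existing 127.0.1.1 line in place
	    sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${HOSTNAME}/" /etc/hosts
	  else
	    # no 127.0.1.1 line yet: append one
	    echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
	  fi
	fi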
	I1124 03:12:10.440322  657716 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:10.440350  657716 ubuntu.go:190] setting up certificates
	I1124 03:12:10.440374  657716 provision.go:84] configureAuth start
	I1124 03:12:10.440443  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:10.458672  657716 provision.go:143] copyHostCerts
	I1124 03:12:10.458743  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:10.458766  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:10.458857  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:10.459021  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:10.459037  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:10.459080  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:10.459183  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:10.459195  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:10.459232  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:10.459323  657716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.no-preload-603010 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-603010]
	I1124 03:12:10.546420  657716 provision.go:177] copyRemoteCerts
	I1124 03:12:10.546503  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:10.546552  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.564799  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:10.669343  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:10.687953  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:10.707320  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:10.728398  657716 provision.go:87] duration metric: took 288.002675ms to configureAuth
	I1124 03:12:10.728450  657716 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:10.728791  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:10.728992  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.754544  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.754857  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.754907  657716 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.846210  656542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:09.851045  656542 fix.go:56] duration metric: took 4.853815531s for fixHost
	I1124 03:12:09.851067  656542 start.go:83] releasing machines lock for "default-k8s-diff-port-993813", held for 4.853861223s
	I1124 03:12:09.851139  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:09.871679  656542 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:09.871744  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.871767  656542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:09.871859  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.897665  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.897832  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.996390  656542 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:10.070447  656542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:10.108350  656542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:10.113659  656542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:10.113732  656542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:10.122258  656542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
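Any pre-existing bridge or podman CNI profile is renamed aside with a .mk_disabled suffix so it cannot shadow the CNI minikube installs (kindnet here); the find invocation above found nothing to move. The same invocation, spelled out with quoting that survives an interactive shell:

	# Sideline bridge/podman CNI configs that are not already disabled:
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;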
	I1124 03:12:10.122274  656542 start.go:496] detecting cgroup driver to use...
	I1124 03:12:10.122301  656542 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:10.122333  656542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:10.138420  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:10.151623  656542 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:10.151696  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:10.169717  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:10.185403  656542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:10.268937  656542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:10.361626  656542 docker.go:234] disabling docker service ...
	I1124 03:12:10.361713  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:10.376259  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:10.389709  656542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:10.493317  656542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:10.581163  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:10.594309  656542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:10.608489  656542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:10.608559  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.618090  656542 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:10.618147  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.629142  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.639755  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.648289  656542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:10.657390  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.667835  656542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.677148  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.686554  656542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:10.694262  656542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:10.701983  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:10.784645  656542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:12:13.176259  656542 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.391580237s)
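The sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on four settings: the pause image, systemd as the cgroup manager, conmon in the "pod" cgroup, and an unprivileged-port sysctl; only then is the daemon restarted (about 2.4s here). A sketch that checks the file ended up in the expected state, using the key names from the log:

	# Expect pause_image=registry.k8s.io/pause:3.10.1, cgroup_manager=systemd,
	# conmon_cgroup=pod, and ip_unprivileged_port_start=0 under default_sysctls:
	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf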
	I1124 03:12:13.176297  656542 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:13.176344  656542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:13.182771  656542 start.go:564] Will wait 60s for crictl version
	I1124 03:12:13.182920  656542 ssh_runner.go:195] Run: which crictl
	I1124 03:12:13.188282  656542 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:13.221129  656542 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:13.221208  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.256022  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.289098  656542 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1124 03:12:09.667322  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:11.810684  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:09.241811  658811 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:12:09.242074  658811 start.go:159] libmachine.API.Create for "embed-certs-284604" (driver="docker")
	I1124 03:12:09.242107  658811 client.go:173] LocalClient.Create starting
	I1124 03:12:09.242186  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:12:09.242224  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242246  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242326  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:12:09.242354  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242374  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242824  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:12:09.259427  658811 cli_runner.go:211] docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:12:09.259477  658811 network_create.go:284] running [docker network inspect embed-certs-284604] to gather additional debugging logs...
	I1124 03:12:09.259492  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604
	W1124 03:12:09.275004  658811 cli_runner.go:211] docker network inspect embed-certs-284604 returned with exit code 1
	I1124 03:12:09.275029  658811 network_create.go:287] error running [docker network inspect embed-certs-284604]: docker network inspect embed-certs-284604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-284604 not found
	I1124 03:12:09.275039  658811 network_create.go:289] output of [docker network inspect embed-certs-284604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-284604 not found
	
	** /stderr **
	I1124 03:12:09.275132  658811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:09.292074  658811 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:12:09.292745  658811 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:12:09.293207  658811 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:12:09.293801  658811 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-50b2e4e61586 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:0d:88:19:7d:df} reservation:<nil>}
	I1124 03:12:09.294406  658811 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6fb41680caed IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:a0:6c:44:95:b2} reservation:<nil>}
	I1124 03:12:09.295273  658811 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eef7f0}
	I1124 03:12:09.295296  658811 network_create.go:124] attempt to create docker network embed-certs-284604 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 03:12:09.295333  658811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-284604 embed-certs-284604
	I1124 03:12:09.341016  658811 network_create.go:108] docker network embed-certs-284604 192.168.94.0/24 created
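To arrive at 192.168.94.0/24 the allocator walked the private /24 candidates in order (.49, .58, .67, .76, .85) and skipped each one already backed by a bridge interface. The taken-subnet scan can be reproduced directly against the Docker daemon:

	# List the subnets existing bridge networks already claim, as the
	# allocator does before settling on a free /24:
	docker network ls --filter driver=bridge -q |
	  xargs -r docker network inspect \
	    --format '{{.Name}} {{range .IPAM.Config}}{{.Subnet}} {{end}}'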
	I1124 03:12:09.341044  658811 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-284604" container
	I1124 03:12:09.341097  658811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:12:09.358710  658811 cli_runner.go:164] Run: docker volume create embed-certs-284604 --label name.minikube.sigs.k8s.io=embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:12:09.377491  658811 oci.go:103] Successfully created a docker volume embed-certs-284604
	I1124 03:12:09.377565  658811 cli_runner.go:164] Run: docker run --rm --name embed-certs-284604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --entrypoint /usr/bin/test -v embed-certs-284604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:12:09.757637  658811 oci.go:107] Successfully prepared a docker volume embed-certs-284604
	I1124 03:12:09.757726  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.757742  658811 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:12:09.757816  658811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:12:13.055592  658811 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (3.297719307s)
	I1124 03:12:13.055632  658811 kic.go:203] duration metric: took 3.29788472s to extract preloaded images to volume ...
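The preload is an lz4-compressed tarball of container images untarred straight into the machine's /var volume by a throwaway kicbase container, so the node boots with its images already in place. A hand-rolled equivalent of the extraction above, assuming the cache lives under $HOME/.minikube (the log uses a Jenkins workspace path):

	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro" \
	  -v embed-certs-284604:/extractDir \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975 \
	  -I lz4 -xf /preloaded.tar -C /extractDir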
	W1124 03:12:13.055721  658811 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:12:13.055758  658811 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:12:13.055810  658811 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:12:13.124836  658811 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-284604 --name embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-284604 --network embed-certs-284604 --ip 192.168.94.2 --volume embed-certs-284604:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
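Each --publish=127.0.0.1::PORT in the run command above binds an ephemeral loopback port, which is why every later step re-inspects the container to learn where SSH actually landed (port 33498 for this machine). The inspect template used throughout the log, runnable on its own:

	# Resolve the ephemeral host port mapped to the container's 22/tcp:
	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  embed-certs-284604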
	I1124 03:12:13.468642  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Running}}
	I1124 03:12:13.493010  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.520114  658811 cli_runner.go:164] Run: docker exec embed-certs-284604 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:12:13.579438  658811 oci.go:144] the created container "embed-certs-284604" has a running status.
	I1124 03:12:13.579473  658811 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa...
	I1124 03:12:13.686392  658811 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:12:13.719014  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.744934  658811 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:12:13.744979  658811 kic_runner.go:114] Args: [docker exec --privileged embed-certs-284604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:12:13.804379  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.833184  658811 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:13.833391  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:13.865266  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:13.865635  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:13.865670  658811 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:13.866448  658811 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55158->127.0.0.1:33498: read: connection reset by peer
	I1124 03:12:13.290552  656542 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:13.314170  656542 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:13.318716  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:13.333300  656542 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:13.333436  656542 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:13.333523  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.375001  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.375027  656542 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:13.375078  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.407152  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.407180  656542 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:13.407190  656542 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 03:12:13.407342  656542 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:13.407444  656542 ssh_runner.go:195] Run: crio config
	I1124 03:12:13.468159  656542 cni.go:84] Creating CNI manager for ""
	I1124 03:12:13.468191  656542 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:13.468220  656542 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:13.468251  656542 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993813 NodeName:default-k8s-diff-port-993813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:13.468425  656542 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993813"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:13.468485  656542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:13.480922  656542 binaries.go:51] Found k8s binaries, skipping transfer
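The generated manifest above stacks four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) sharing the 10.244.0.0/16 pod CIDR and the CRI-O socket. It can be sanity-checked before use with kubeadm's own validator, assuming the binary and staging path that appear in the log:

	# Validate the stacked config against the v1.34 API types:
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new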
	I1124 03:12:13.480989  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:13.491437  656542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 03:12:13.510538  656542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:13.531599  656542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
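"scp memory" in these lines means the payload is generated in-process and streamed over the SSH session rather than read from a local file. A rough equivalent for the kubelet drop-in, where $DROPIN is a hypothetical variable holding the generated unit text and the SSH endpoint is the one from the log:

	printf '%s' "$DROPIN" |
	  ssh -p 33488 -i ~/.minikube/machines/default-k8s-diff-port-993813/id_rsa \
	    docker@127.0.0.1 \
	    'sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null'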
	I1124 03:12:13.550625  656542 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:13.557123  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:13.570105  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:13.687069  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:13.711246  656542 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813 for IP: 192.168.76.2
	I1124 03:12:13.711268  656542 certs.go:195] generating shared ca certs ...
	I1124 03:12:13.711287  656542 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:13.711456  656542 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:13.711513  656542 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:13.711526  656542 certs.go:257] generating profile certs ...
	I1124 03:12:13.711642  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key
	I1124 03:12:13.711706  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619
	I1124 03:12:13.711753  656542 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key
	I1124 03:12:13.711996  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:13.712051  656542 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:13.712065  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:13.712101  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:13.712139  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:13.712175  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:13.712240  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.712851  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:13.744604  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:13.773924  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:13.797454  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:13.831783  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 03:12:13.870484  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:13.900124  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:13.922822  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:12:13.948171  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:13.977351  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:14.003032  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:14.029032  656542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:14.044929  656542 ssh_runner.go:195] Run: openssl version
	I1124 03:12:14.055102  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:14.069569  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074149  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074206  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.129455  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:14.139467  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:14.150460  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155547  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155598  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.213122  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:14.224488  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:14.235043  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239741  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239796  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.296275  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
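The symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: that is how the system trust store looks certificates up, so each CA must be reachable under its hash. Reproducing the step for the cluster CA:

	# Link a CA into the trust store under its subject hash:
	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"
	openssl x509 -subject -noout -in "/etc/ssl/certs/${H}.0"   # sanity check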
	I1124 03:12:14.307247  656542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:14.315784  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:14.374911  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:14.452037  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:14.514532  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:14.577046  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:14.634822  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
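openssl x509 -checkend 86400 exits non-zero if the certificate expires within the next 24 hours; the restart path runs it across the control-plane certs before reusing them. The same check, looped over the certs probed above:

	for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
	         etcd/server etcd/healthcheck-client etcd/peer; do
	  if sudo openssl x509 -noout -checkend 86400 \
	       -in "/var/lib/minikube/certs/${c}.crt"; then
	    echo "${c}: valid for >24h"
	  else
	    echo "${c}: expiring or expired"
	  fi
	done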
	I1124 03:12:14.697600  656542 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:14.697704  656542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:14.697759  656542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:14.736428  656542 cri.go:89] found id: "9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6"
	I1124 03:12:14.736451  656542 cri.go:89] found id: "a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329"
	I1124 03:12:14.736458  656542 cri.go:89] found id: "dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7"
	I1124 03:12:14.736462  656542 cri.go:89] found id: "11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e"
	I1124 03:12:14.736466  656542 cri.go:89] found id: ""
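StartCluster enumerates the existing kube-system containers through the CRI rather than the Docker API, filtering on the pod-namespace label; the four IDs above are what CRI-O reported. The filter is runnable directly on the node:

	# IDs of every kube-system container known to CRI-O (any state):
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system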
	I1124 03:12:14.736511  656542 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:14.754070  656542 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:14Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:14.754156  656542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:14.765200  656542 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:14.765224  656542 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:14.765273  656542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:14.773243  656542 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:14.773947  656542 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993813" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.774328  656542 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993813" cluster setting kubeconfig missing "default-k8s-diff-port-993813" context setting]
	I1124 03:12:14.774925  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.776519  656542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:14.785657  656542 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 03:12:14.785687  656542 kubeadm.go:602] duration metric: took 20.455875ms to restartPrimaryControlPlane
	I1124 03:12:14.785704  656542 kubeadm.go:403] duration metric: took 88.114399ms to StartCluster
	I1124 03:12:14.785722  656542 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.785796  656542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.786941  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.787180  656542 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:14.787429  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:14.787487  656542 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:14.787568  656542 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.787584  656542 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.787592  656542 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:14.787615  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.788183  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.788464  656542 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788516  656542 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993813"
	I1124 03:12:14.788466  656542 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788738  656542 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.788750  656542 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:14.788782  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.789431  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.789731  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.792034  656542 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:14.793166  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.820828  656542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:14.821632  656542 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.821655  656542 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:14.821731  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.821909  656542 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:12:14.822084  656542 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:14.822112  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:14.822188  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.822548  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.827335  656542 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:13.173638  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:13.173665  657716 machine.go:97] duration metric: took 6.241868553s to provisionDockerMachine
	I1124 03:12:13.173679  657716 start.go:293] postStartSetup for "no-preload-603010" (driver="docker")
	I1124 03:12:13.173692  657716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:13.173754  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:13.173803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.199819  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.311414  657716 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:13.316263  657716 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:13.316292  657716 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:13.316304  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:13.316362  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:13.316451  657716 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:13.316564  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:13.330333  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.349678  657716 start.go:296] duration metric: took 175.98281ms for postStartSetup
	I1124 03:12:13.349757  657716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:13.349803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.372668  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.477580  657716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:13.483572  657716 fix.go:56] duration metric: took 6.891356705s for fixHost
	I1124 03:12:13.483602  657716 start.go:83] releasing machines lock for "no-preload-603010", held for 6.891418388s
	I1124 03:12:13.483679  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:13.509057  657716 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:13.509123  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.509169  657716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:13.509281  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.533830  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.535423  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.716640  657716 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:13.727633  657716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:13.784701  657716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:13.789877  657716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:13.789964  657716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:13.799956  657716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:12:13.799989  657716 start.go:496] detecting cgroup driver to use...
	I1124 03:12:13.800021  657716 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:13.800080  657716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:13.821650  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:13.845364  657716 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:13.845437  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:13.876223  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:13.896810  657716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:14.018144  657716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:14.133192  657716 docker.go:234] disabling docker service ...
	I1124 03:12:14.133276  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:14.151812  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:14.167561  657716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:14.282838  657716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:14.401610  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
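	Stopping a systemd unit is not enough to keep socket activation or dependencies from pulling it back in, which is why the log stops, disables, and masks both the cri-docker and docker units before handing the node to cri-o. A rough Go equivalent of that stop/disable/mask sequence (unit names from the log; the best-effort error handling is an assumption):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// disableService stops, disables and masks a systemd unit so nothing
	// can restart it, mirroring the docker/cri-docker handling above.
	func disableService(name string) {
		for _, args := range [][]string{
			{"systemctl", "stop", "-f", name},
			{"systemctl", "disable", name},
			{"systemctl", "mask", name},
		} {
			if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
				// Best-effort: the unit may not exist on this image.
				fmt.Printf("%v: %v (%s)\n", args, err, out)
			}
		}
	}

	func main() {
		disableService("docker.socket")
		disableService("docker.service")
	}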
	I1124 03:12:14.417930  657716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:14.437107  657716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:14.437170  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.449631  657716 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:14.449698  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.462463  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.477641  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.490417  657716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:14.504273  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.516484  657716 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.526509  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.538280  657716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:14.546998  657716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:14.555574  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.685636  657716 ssh_runner.go:195] Run: sudo systemctl restart crio
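	All of the cri-o tuning above is plain line-oriented editing of /etc/crio/crio.conf.d/02-crio.conf followed by a daemon-reload and restart. A small Go sketch of the two central substitutions, the pause image and the cgroup manager (function name illustrative; the real code shells out to sed as shown in the log):

	package main

	import (
		"os"
		"regexp"
	)

	// patchCrioConf rewrites the pause image and cgroup manager lines in a
	// cri-o drop-in, the same substitutions the sed commands above perform.
	func patchCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
		data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupManager+`"`))
		return os.WriteFile(path, data, 0o644)
	}

	func main() {
		_ = patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.10.1", "systemd")
	}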
	I1124 03:12:14.944749  657716 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:14.944917  657716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:14.950036  657716 start.go:564] Will wait 60s for crictl version
	I1124 03:12:14.950115  657716 ssh_runner.go:195] Run: which crictl
	I1124 03:12:14.954328  657716 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:14.985292  657716 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:14.985374  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.030503  657716 ssh_runner.go:195] Run: crio --version
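	The two 60-second waits above amount to polling: first for the CRI socket to exist, then for crictl to answer a version query. A compact Go sketch of that loop, assuming crictl is on PATH and sudo is passwordless as in this CI environment:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"time"
	)

	// waitForCrio polls for the CRI socket and then for a successful
	// `crictl version`, giving up after the deadline (60s in the log).
	func waitForCrio(sock string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(sock); err == nil {
				if out, err := exec.Command("sudo", "crictl", "version").Output(); err == nil {
					fmt.Print(string(out))
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("cri-o not ready after %s", timeout)
	}

	func main() {
		if err := waitForCrio("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}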
	I1124 03:12:15.075694  657716 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:15.076822  657716 cli_runner.go:164] Run: docker network inspect no-preload-603010 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:15.102488  657716 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:15.108702  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
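	The /etc/hosts rewrite above is deliberately idempotent: it filters out any previous host.minikube.internal line before appending the current mapping, so repeated starts never accumulate stale entries. The same filter-then-append in Go (writing the file directly, rather than via the /tmp copy the shell pipeline uses):

	package main

	import (
		"os"
		"strings"
	)

	// ensureHostsEntry drops any stale line for the given name and appends
	// a fresh "IP<tab>name" mapping, mirroring the grep/echo pipeline.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		_ = ensureHostsEntry("/etc/hosts", "192.168.85.1", "host.minikube.internal")
	}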
	I1124 03:12:15.124431  657716 kubeadm.go:884] updating cluster {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:15.124588  657716 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:15.124636  657716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:15.167486  657716 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:15.167521  657716 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:15.167539  657716 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:15.167821  657716 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-603010 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:15.167925  657716 ssh_runner.go:195] Run: crio config
	I1124 03:12:15.235069  657716 cni.go:84] Creating CNI manager for ""
	I1124 03:12:15.235092  657716 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:15.235110  657716 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:15.235137  657716 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603010 NodeName:no-preload-603010 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:15.235315  657716 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603010"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:15.235402  657716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:15.246426  657716 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:15.246486  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:15.255073  657716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:12:15.274174  657716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:15.291964  657716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
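	The kubeadm.yaml.new just copied over is a four-document YAML stream: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration, exactly as printed above. A quick Go sanity check (path from the scp step; helper name illustrative) that a rendered file contains the expected kinds:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// kindsIn lists the `kind:` values in a multi-document kubeadm
	// manifest; for the config above this should yield InitConfiguration,
	// ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration.
	func kindsIn(path string) ([]string, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return nil, err
		}
		re := regexp.MustCompile(`(?m)^kind: (\S+)`)
		var kinds []string
		for _, m := range re.FindAllStringSubmatch(string(data), -1) {
			kinds = append(kinds, m[1])
		}
		return kinds, nil
	}

	func main() {
		kinds, err := kindsIn("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println(kinds)
	}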
	I1124 03:12:15.310704  657716 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:15.315241  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:15.329049  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:15.444004  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:15.468249  657716 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010 for IP: 192.168.85.2
	I1124 03:12:15.468275  657716 certs.go:195] generating shared ca certs ...
	I1124 03:12:15.468303  657716 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:15.468461  657716 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:15.468527  657716 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:15.468545  657716 certs.go:257] generating profile certs ...
	I1124 03:12:15.468671  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key
	I1124 03:12:15.468756  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738
	I1124 03:12:15.468820  657716 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key
	I1124 03:12:15.469056  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:15.469155  657716 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:15.469190  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:15.469235  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:15.469307  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:15.469360  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:15.469452  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:15.470423  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:15.492954  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:15.516840  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:15.539720  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:15.572434  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:12:15.602383  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:15.627969  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:15.650700  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:15.671263  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:15.692710  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:15.715510  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:15.740163  657716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:15.756242  657716 ssh_runner.go:195] Run: openssl version
	I1124 03:12:15.764455  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:15.774930  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779615  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779675  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.837760  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:12:15.848860  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:15.859402  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864242  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864304  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.923088  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:15.933908  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:15.944242  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949198  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949248  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:16.007273  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
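	Each "ln -fs" above installs the OpenSSL subject-hash symlink (<hash>.0) that TLS libraries use to locate a CA inside /etc/ssl/certs. A Go sketch that derives the hash by shelling out to openssl, as the log does, and then creates the link (helper name illustrative):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkCAByHash computes the OpenSSL subject hash of a CA certificate
	// and installs the <hash>.0 symlink the TLS lookup path expects,
	// the same steps as the openssl/ln commands in the log.
	func linkCAByHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		_ = os.Remove(link) // replace any stale link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}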
	I1124 03:12:16.018117  657716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:16.023108  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:16.086212  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:16.144287  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:16.203439  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:16.267980  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:16.329154  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
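	The "-checkend 86400" runs above ask whether each control-plane certificate expires within the next 24 hours. The same check can be done natively with crypto/x509; a sketch, using one of the cert paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at pemPath expires
	// within d, the question `openssl x509 -checkend 86400` answers above.
	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}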
	I1124 03:12:16.391972  657716 kubeadm.go:401] StartCluster: {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:16.392083  657716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:16.392153  657716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:16.431895  657716 cri.go:89] found id: "3dc1c0625a30c329549c40b7e0c43d1ab4d818ccc0fb85ea668066af23a327d3"
	I1124 03:12:16.431924  657716 cri.go:89] found id: "4e8e84f339bed2139577a34b2e6715ab1a8d9a3b425a16886245d61781115cf0"
	I1124 03:12:16.431930  657716 cri.go:89] found id: "7294ac6825ca8afcd936803374aa461bcb4d637ea8743600784037c5acb225b2"
	I1124 03:12:16.431934  657716 cri.go:89] found id: "767a9908e75937581bb6f9fd527e760a401658cde3d3cf28bc2b66613eab1027"
	I1124 03:12:16.431938  657716 cri.go:89] found id: ""
	I1124 03:12:16.431989  657716 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:16.448469  657716 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:16Z" level=error msg="open /run/runc: no such file or directory"
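	With /run/runc absent, the runc fallback fails and the crictl listing above is what actually enumerates the kube-system containers. A minimal Go wrapper around that exact crictl invocation:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// kubeSystemContainers returns the container IDs crictl reports for
	// the kube-system namespace, as in the `crictl ps -a --quiet` run.
	func kubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := kubeSystemContainers()
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Printf("found %d kube-system containers\n", len(ids))
	}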
	I1124 03:12:16.448636  657716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:16.460046  657716 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:16.460066  657716 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:16.460159  657716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:16.470578  657716 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:16.472039  657716 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-603010" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.472691  657716 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-603010" cluster setting kubeconfig missing "no-preload-603010" context setting]
	I1124 03:12:16.473827  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.476388  657716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:16.491280  657716 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 03:12:16.491307  657716 kubeadm.go:602] duration metric: took 31.234841ms to restartPrimaryControlPlane
	I1124 03:12:16.491317  657716 kubeadm.go:403] duration metric: took 99.357197ms to StartCluster
	I1124 03:12:16.491333  657716 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.491393  657716 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.492731  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.492990  657716 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:16.493291  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:16.493352  657716 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:16.493441  657716 addons.go:70] Setting storage-provisioner=true in profile "no-preload-603010"
	I1124 03:12:16.493465  657716 addons.go:239] Setting addon storage-provisioner=true in "no-preload-603010"
	W1124 03:12:16.493473  657716 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:16.493503  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494027  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.494266  657716 addons.go:70] Setting dashboard=true in profile "no-preload-603010"
	I1124 03:12:16.494322  657716 addons.go:239] Setting addon dashboard=true in "no-preload-603010"
	I1124 03:12:16.494338  657716 addons.go:70] Setting default-storageclass=true in profile "no-preload-603010"
	I1124 03:12:16.494434  657716 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603010"
	W1124 03:12:16.494361  657716 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:16.494570  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494863  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.495005  657716 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:16.495647  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.496468  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:16.527269  657716 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:16.528480  657716 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:16.528517  657716 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1124 03:12:14.168310  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:16.172923  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:18.176795  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:14.828319  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:14.828372  656542 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:14.828432  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.858092  656542 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:14.858118  656542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:14.858192  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.865650  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.866433  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.895242  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.975501  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:14.992389  656542 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:12:15.008151  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:15.016186  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:15.016211  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:15.031574  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:15.042522  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:15.042540  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:15.074331  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:15.074365  656542 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:15.109090  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:15.109113  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:15.128161  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:15.128184  656542 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:15.147874  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:15.147903  656542 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:15.168191  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:15.168211  656542 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:15.185637  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:15.185661  656542 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:15.202994  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:15.203016  656542 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:15.221608  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
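	The dashboard addon is applied as a single kubectl apply with one -f flag per staged manifest, using the bundled kubectl binary against the in-cluster kubeconfig. A Go sketch that assembles and runs such a command (the file list here is abbreviated; the full set is in the command above):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// applyManifests runs the bundled kubectl with one -f flag per addon
	// manifest, like the dashboard apply in the log. sudo accepts the
	// KUBECONFIG=... assignment as its first argument.
	func applyManifests(kubectl, kubeconfig string, files []string) error {
		args := []string{"KUBECONFIG=" + kubeconfig, kubectl, "apply"}
		for _, f := range files {
			args = append(args, "-f", f)
		}
		out, err := exec.Command("sudo", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}

	func main() {
		files := []string{
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml", // ...and the rest
		}
		_ = applyManifests("/var/lib/minikube/binaries/v1.34.1/kubectl",
			"/var/lib/minikube/kubeconfig", files)
	}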
	I1124 03:12:17.996962  656542 node_ready.go:49] node "default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:17.997067  656542 node_ready.go:38] duration metric: took 3.004589581s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:12:17.997096  656542 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:17.997184  656542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:18.834613  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.826385361s)
	I1124 03:12:18.834690  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.803092411s)
	I1124 03:12:18.834853  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.613213665s)
	I1124 03:12:18.834988  656542 api_server.go:72] duration metric: took 4.047778988s to wait for apiserver process to appear ...
	I1124 03:12:18.835771  656542 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:18.835800  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:18.838614  656542 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993813 addons enable metrics-server
	
	I1124 03:12:18.844882  656542 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 03:12:17.043130  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.043165  658811 ubuntu.go:182] provisioning hostname "embed-certs-284604"
	I1124 03:12:17.043247  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.069679  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.070109  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.070142  658811 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-284604 && echo "embed-certs-284604" | sudo tee /etc/hostname
	I1124 03:12:17.259114  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.259199  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.284082  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.284399  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.284433  658811 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-284604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-284604/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-284604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:17.452374  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:17.452411  658811 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:17.452438  658811 ubuntu.go:190] setting up certificates
	I1124 03:12:17.452452  658811 provision.go:84] configureAuth start
	I1124 03:12:17.452521  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:17.483434  658811 provision.go:143] copyHostCerts
	I1124 03:12:17.483502  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:17.483519  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:17.483580  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:17.483712  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:17.483725  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:17.483764  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:17.483851  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:17.483858  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:17.483909  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:17.483990  658811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-284604 san=[127.0.0.1 192.168.94.2 embed-certs-284604 localhost minikube]
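	configureAuth issues a server certificate signed by the local CA whose SANs cover every name a client might dial: loopback, the container IP, the hostname, localhost and minikube, as listed in the san=[...] above. A library-style Go sketch of issuing such a cert with crypto/x509 (the org and lifetime are illustrative, not minikube's exact values):

	package provision

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	// writeServerCert issues a CA-signed server certificate whose SANs
	// cover the DNS names and IPs supplied, then writes the PEM pair.
	func writeServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
		dnsNames []string, ips []net.IP, certOut, keyOut string) error {

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-284604"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     dnsNames,
			IPAddresses:  ips,
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return err
		}
		if err := os.WriteFile(certOut,
			pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644); err != nil {
			return err
		}
		return os.WriteFile(keyOut,
			pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY",
				Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
	}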
	I1124 03:12:17.911206  658811 provision.go:177] copyRemoteCerts
	I1124 03:12:17.911335  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:17.911394  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.943914  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.069938  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:18.098447  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:18.124997  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:18.162531  658811 provision.go:87] duration metric: took 710.055135ms to configureAuth
	I1124 03:12:18.162560  658811 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:18.162764  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:18.162877  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.187248  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:18.187553  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:18.187575  658811 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:18.557227  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:18.557257  658811 machine.go:97] duration metric: took 4.723983027s to provisionDockerMachine
	I1124 03:12:18.557270  658811 client.go:176] duration metric: took 9.315155053s to LocalClient.Create
	I1124 03:12:18.557286  658811 start.go:167] duration metric: took 9.315214435s to libmachine.API.Create "embed-certs-284604"
	I1124 03:12:18.557298  658811 start.go:293] postStartSetup for "embed-certs-284604" (driver="docker")
	I1124 03:12:18.557310  658811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:18.557379  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:18.557432  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.587404  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.715877  658811 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:18.721275  658811 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:18.721309  658811 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:18.721322  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:18.721381  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:18.721473  658811 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:18.721597  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:18.732645  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:18.763370  658811 start.go:296] duration metric: took 206.056597ms for postStartSetup
	I1124 03:12:18.763732  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.791899  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:18.792183  658811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:18.792233  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.820806  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.936530  658811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:18.948570  658811 start.go:128] duration metric: took 9.708372989s to createHost
	I1124 03:12:18.948686  658811 start.go:83] releasing machines lock for "embed-certs-284604", held for 9.708587492s
	I1124 03:12:18.948771  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.973190  658811 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:18.973375  658811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:18.973512  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.973582  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.998620  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.999698  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.845938  656542 addons.go:530] duration metric: took 4.058450553s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 03:12:18.846295  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:18.846717  656542 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:12:19.335969  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:19.342155  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1124 03:12:19.343392  656542 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:19.343421  656542 api_server.go:131] duration metric: took 507.639836ms to wait for apiserver health ...
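	The 500s logged above are expected while the apiserver's post-start hooks (RBAC bootstrap roles, priority classes) finish; the health wait simply polls /healthz until it returns 200, as it does here after half a second. A Go sketch of that poll loop (the InsecureSkipVerify transport is a stand-in for minikube's CA-pinned client):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitHealthz polls the apiserver /healthz endpoint until it returns
	// 200, tolerating the transient 500s seen while hooks complete.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz: %s\n", body)
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitHealthz("https://192.168.76.2:8444/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}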
	I1124 03:12:19.343433  656542 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:19.347170  656542 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:19.347220  656542 system_pods.go:61] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.347233  656542 system_pods.go:61] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.347244  656542 system_pods.go:61] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.347253  656542 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.347263  656542 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.347271  656542 system_pods.go:61] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.347279  656542 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.347290  656542 system_pods.go:61] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.347300  656542 system_pods.go:74] duration metric: took 3.857291ms to wait for pod list to return data ...
	I1124 03:12:19.347309  656542 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:19.350005  656542 default_sa.go:45] found service account: "default"
	I1124 03:12:19.350027  656542 default_sa.go:55] duration metric: took 2.709767ms for default service account to be created ...
	I1124 03:12:19.350036  656542 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:19.354450  656542 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:19.354480  656542 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.354492  656542 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.354502  656542 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.354512  656542 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.354525  656542 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.354534  656542 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.354542  656542 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.354550  656542 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.354560  656542 system_pods.go:126] duration metric: took 4.516416ms to wait for k8s-apps to be running ...
	I1124 03:12:19.354569  656542 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:19.354617  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:19.377699  656542 system_svc.go:56] duration metric: took 23.119925ms WaitForService to wait for kubelet
	I1124 03:12:19.377726  656542 kubeadm.go:587] duration metric: took 4.590516557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:19.377808  656542 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:19.381785  656542 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:19.381815  656542 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:19.381831  656542 node_conditions.go:105] duration metric: took 4.017737ms to run NodePressure ...
	I1124 03:12:19.381846  656542 start.go:242] waiting for startup goroutines ...
	I1124 03:12:19.381857  656542 start.go:247] waiting for cluster config update ...
	I1124 03:12:19.381883  656542 start.go:256] writing updated cluster config ...
	I1124 03:12:19.382229  656542 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:19.387932  656542 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:19.394333  656542 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:16.529636  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:16.529826  657716 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:16.529877  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.529719  657716 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.530024  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:16.530070  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.534729  657716 addons.go:239] Setting addon default-storageclass=true in "no-preload-603010"
	W1124 03:12:16.534754  657716 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:16.534783  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.539339  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.565768  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.582397  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.585042  657716 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.585070  657716 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:16.585126  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.617946  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.706410  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:16.731745  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:16.731773  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:16.736337  657716 node_ready.go:35] waiting up to 6m0s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:16.736937  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.758823  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:16.758847  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:16.768684  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.788344  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:16.788369  657716 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:16.806593  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:16.806620  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:16.847576  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:16.847609  657716 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:16.867721  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:16.867755  657716 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:16.886765  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:16.886787  657716 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:16.907569  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:16.907732  657716 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:16.929396  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:16.929417  657716 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:16.958374  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
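
The dashboard rollout above stages each manifest under /etc/kubernetes/addons via scp and then applies them all in one kubectl invocation. A rough Go sketch of that final step, assuming a kubectl on PATH and the kubeconfig path shown in the log; the two manifest paths listed are a representative subset of the ten applied above.

// applyaddons.go - an illustrative sketch of applying several staged addon
// manifests in a single kubectl invocation, as the dashboard apply above does.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	}
	// Build "apply -f a.yaml -f b.yaml ..." so everything lands in one call.
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("kubectl", args...)
	// Point kubectl at the cluster's kubeconfig, as the KUBECONFIG= prefix
	// in the logged command does.
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
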
	I1124 03:12:19.957067  657716 node_ready.go:49] node "no-preload-603010" is "Ready"
	I1124 03:12:19.957111  657716 node_ready.go:38] duration metric: took 3.220732108s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:19.957131  657716 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:19.957256  657716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:20.880814  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.143842388s)
	I1124 03:12:20.881241  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.112181993s)
	I1124 03:12:21.157660  657716 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.200376454s)
	I1124 03:12:21.157703  657716 api_server.go:72] duration metric: took 4.664681444s to wait for apiserver process to appear ...
	I1124 03:12:21.157713  657716 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:21.157733  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.158403  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199980339s)
	I1124 03:12:21.160177  657716 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-603010 addons enable metrics-server
	
	I1124 03:12:21.161363  657716 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 03:12:19.120481  658811 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:19.211741  658811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:19.277394  658811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:19.284078  658811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:19.284149  658811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:19.319995  658811 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:12:19.320028  658811 start.go:496] detecting cgroup driver to use...
	I1124 03:12:19.320064  658811 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:19.320117  658811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:19.345823  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:19.367716  658811 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:19.367782  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:19.389799  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:19.412438  658811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:19.524730  658811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:19.637210  658811 docker.go:234] disabling docker service ...
	I1124 03:12:19.637286  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:19.659861  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:19.677152  658811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:19.823448  658811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:19.960707  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:19.981616  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:20.012418  658811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:20.012486  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.058077  658811 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:20.058214  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.074742  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.118587  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.135044  658811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:20.151861  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.172656  658811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.194765  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.232792  658811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:20.242855  658811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:20.253417  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:20.371692  658811 ssh_runner.go:195] Run: sudo systemctl restart crio
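
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place to pin the pause image and force the systemd cgroup manager before restarting CRI-O. Here is a small Go equivalent of the two key edits, purely illustrative: minikube shells out to sed over SSH rather than doing this in-process.

// criocfg.go - a sketch of the in-place config edits performed above:
// set cgroup_manager to systemd and pin the pause image. The file path
// and key names mirror the log.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	// Replace whole lines, as the logged `sed -i 's|^.*key = .*$|...|'` does.
	out := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	out = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(out, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Println("write:", err)
	}
	// A `systemctl restart crio` would follow, as in the log.
}
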
	I1124 03:12:21.221343  658811 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:21.221440  658811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:21.226905  658811 start.go:564] Will wait 60s for crictl version
	I1124 03:12:21.227016  658811 ssh_runner.go:195] Run: which crictl
	I1124 03:12:21.231693  658811 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:21.262514  658811 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:21.262603  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.302192  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.363037  658811 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:21.162777  657716 addons.go:530] duration metric: took 4.669427095s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 03:12:21.163688  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:21.163718  657716 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:20.668896  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:23.167980  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:21.364543  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:21.388019  658811 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:21.393290  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
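
The one-liner above rewrites /etc/hosts: strip any existing host.minikube.internal entry, append the current gateway IP, write to a temp file, and copy it back with sudo. A Go sketch of the same rewrite, assuming direct write access instead of the temp-file-plus-sudo-cp dance in the log.

// hostsentry.go - a sketch of refreshing the host.minikube.internal entry,
// mirroring the logged `grep -v ...; echo ...` pipeline. The IP is taken
// from the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.94.1\thost.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println("read:", err)
		return
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue // drop the stale entry, as grep -v does
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		fmt.Println("write:", err)
	}
}
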
	I1124 03:12:21.406629  658811 kubeadm.go:884] updating cluster {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:21.406778  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:21.406846  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.445258  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.445284  658811 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:21.445336  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.471000  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.471025  658811 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:21.471037  658811 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:21.471125  658811 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-284604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:21.471186  658811 ssh_runner.go:195] Run: crio config
	I1124 03:12:21.516457  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:21.516480  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:21.516502  658811 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:21.516532  658811 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-284604 NodeName:embed-certs-284604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:21.516680  658811 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-284604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:21.516751  658811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:21.524967  658811 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:21.525035  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:21.533487  658811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 03:12:21.547228  658811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:21.640415  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1124 03:12:21.656434  658811 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:21.660696  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:21.674410  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:21.772584  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:21.798340  658811 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604 for IP: 192.168.94.2
	I1124 03:12:21.798360  658811 certs.go:195] generating shared ca certs ...
	I1124 03:12:21.798381  658811 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.798539  658811 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:21.798593  658811 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:21.798607  658811 certs.go:257] generating profile certs ...
	I1124 03:12:21.798690  658811 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key
	I1124 03:12:21.798708  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt with IP's: []
	I1124 03:12:21.837756  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt ...
	I1124 03:12:21.837790  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt: {Name:mk6d8aec213556beda470e3e5188eed1aec5e183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838000  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key ...
	I1124 03:12:21.838030  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key: {Name:mk56f44e1d331f82a560e15fe6a3c3ca4602bba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838172  658811 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087
	I1124 03:12:21.838189  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 03:12:21.915471  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 ...
	I1124 03:12:21.915494  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087: {Name:mk185605a13bb00cdff0decbde0063003287a88f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915630  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 ...
	I1124 03:12:21.915643  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087: {Name:mk1404f69a73d575873220c9d20779709c9db66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915715  658811 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt
	I1124 03:12:21.915784  658811 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key
	I1124 03:12:21.915837  658811 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key
	I1124 03:12:21.915852  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt with IP's: []
	I1124 03:12:22.064876  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt ...
	I1124 03:12:22.064923  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt: {Name:mk7bbfb718db4eee243d6b6658f5b6db725b34b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:22.065108  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key ...
	I1124 03:12:22.065140  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key: {Name:mk282c31a6bdbd1f185d5fa986bb6679f789f94a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
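
The crypto.go lines above generate profile certificates whose IP SANs cover the in-cluster service VIP (10.96.0.1), loopback, and the node IP. The following self-contained Go sketch issues such a certificate; it is self-signed for brevity, whereas minikube signs against its minikubeCA, and the key size and validity period are assumptions.

// ipsancert.go - an illustrative sketch of what "generating signed profile
// cert ... with IP's" amounts to: an x509 certificate whose IP SANs match
// the list logged above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// 2048-bit RSA and 24h validity are assumptions for the sketch.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		fmt.Println("keygen:", err)
		return
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// IP SANs as logged: service VIP, loopback, and the node IP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.94.2"),
		},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here (template == parent); minikube signs with its CA key.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Println("create:", err)
		return
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
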
	I1124 03:12:22.065488  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:22.065564  658811 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:22.065576  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:22.065602  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:22.065630  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:22.065654  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:22.065702  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:22.066383  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:22.086471  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:22.103602  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:22.120085  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:22.137488  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:12:22.154084  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:22.171055  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:22.187877  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:22.204407  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:22.222560  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:22.241380  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:22.258066  658811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:22.269950  658811 ssh_runner.go:195] Run: openssl version
	I1124 03:12:22.276120  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:22.283870  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287375  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287414  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.321400  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:22.329479  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:22.338113  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342815  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342865  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.384524  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:22.393408  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:22.402946  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.406951  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.407009  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.445501  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
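
The openssl/ln pairs above implement OpenSSL's subject-hash lookup convention: each CA PEM installed under /usr/share/ca-certificates gets a /etc/ssl/certs/<subject-hash>.0 symlink so TLS clients can find it by hash. A Go sketch of creating one such link, shelling out to openssl for the hash; paths mirror the log and error handling is minimal.

// certhashlink.go - a sketch of the subject-hash symlink step logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem"
	// `openssl x509 -hash -noout` prints the subject hash, e.g. "b5213941"
	// as seen in the log above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Println("openssl:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // replace any stale link, mirroring `ln -fs`
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Println("symlink:", err)
	}
}
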
	I1124 03:12:22.454521  658811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:22.458152  658811 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:12:22.458212  658811 kubeadm.go:401] StartCluster: {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:22.458278  658811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:22.458330  658811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:22.487574  658811 cri.go:89] found id: ""
	I1124 03:12:22.487653  658811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:22.495876  658811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:12:22.505058  658811 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:12:22.505121  658811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:12:22.515162  658811 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:12:22.515181  658811 kubeadm.go:158] found existing configuration files:
	
	I1124 03:12:22.515229  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:12:22.525864  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:12:22.525956  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:12:22.535632  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:12:22.545975  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:12:22.546068  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:12:22.556144  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.566062  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:12:22.566123  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.576364  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:12:22.587041  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:12:22.587089  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
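
The grep/rm sequence above is a stale-config sweep: each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint; otherwise it is removed so that kubeadm init can regenerate it. A compact Go rendering of the same check, with filenames and the endpoint string taken from the log.

// staleconf.go - a sketch of the stale-kubeconfig cleanup logged above.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		data, err := os.ReadFile(c)
		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
			// Missing or pointing elsewhere: remove it, as `rm -f` does,
			// and let kubeadm regenerate it on init.
			os.Remove(c)
			fmt.Println("removed", c)
		}
	}
}
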
	I1124 03:12:22.596656  658811 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:12:22.678370  658811 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:12:22.762592  658811 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1124 03:12:21.400229  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:23.400859  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:21.658606  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.664294  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:12:21.665654  657716 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:21.665685  657716 api_server.go:131] duration metric: took 507.965368ms to wait for apiserver health ...
	I1124 03:12:21.665696  657716 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:21.669523  657716 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:21.669569  657716 system_pods.go:61] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.669584  657716 system_pods.go:61] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.669600  657716 system_pods.go:61] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.669613  657716 system_pods.go:61] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.669620  657716 system_pods.go:61] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.669631  657716 system_pods.go:61] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.669640  657716 system_pods.go:61] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.669651  657716 system_pods.go:61] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.669661  657716 system_pods.go:74] duration metric: took 3.958242ms to wait for pod list to return data ...
	I1124 03:12:21.669744  657716 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:21.672641  657716 default_sa.go:45] found service account: "default"
	I1124 03:12:21.672665  657716 default_sa.go:55] duration metric: took 2.912794ms for default service account to be created ...
	I1124 03:12:21.672674  657716 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:21.676337  657716 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:21.676367  657716 system_pods.go:89] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.676379  657716 system_pods.go:89] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.676394  657716 system_pods.go:89] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.676403  657716 system_pods.go:89] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.676411  657716 system_pods.go:89] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.676422  657716 system_pods.go:89] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.676433  657716 system_pods.go:89] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.676441  657716 system_pods.go:89] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.676450  657716 system_pods.go:126] duration metric: took 3.770261ms to wait for k8s-apps to be running ...
	I1124 03:12:21.676459  657716 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:21.676504  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:21.690659  657716 system_svc.go:56] duration metric: took 14.192089ms WaitForService to wait for kubelet
	I1124 03:12:21.690686  657716 kubeadm.go:587] duration metric: took 5.197662584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:21.690707  657716 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:21.693136  657716 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:21.693164  657716 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:21.693184  657716 node_conditions.go:105] duration metric: took 2.469957ms to run NodePressure ...
	I1124 03:12:21.693203  657716 start.go:242] waiting for startup goroutines ...
	I1124 03:12:21.693215  657716 start.go:247] waiting for cluster config update ...
	I1124 03:12:21.693239  657716 start.go:256] writing updated cluster config ...
	I1124 03:12:21.693532  657716 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:21.697901  657716 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:21.701025  657716 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:12:23.706826  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.707596  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.168947  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:27.669069  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:25.402048  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.901054  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.707794  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.710379  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.675678  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:32.166267  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:34.784594  658811 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:12:34.784648  658811 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:12:34.784736  658811 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:12:34.784810  658811 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:12:34.784870  658811 kubeadm.go:319] OS: Linux
	I1124 03:12:34.784983  658811 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:12:34.785059  658811 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:12:34.785107  658811 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:12:34.785166  658811 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:12:34.785237  658811 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:12:34.785303  658811 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:12:34.785372  658811 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:12:34.785441  658811 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:12:34.785518  658811 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:12:34.785647  658811 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:12:34.785738  658811 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:12:34.785806  658811 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:12:34.786978  658811 out.go:252]   - Generating certificates and keys ...
	I1124 03:12:34.787057  658811 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:12:34.787166  658811 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:12:34.787260  658811 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:12:34.787314  658811 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:12:34.787380  658811 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:12:34.787463  658811 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:12:34.787510  658811 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:12:34.787654  658811 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787713  658811 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:12:34.787835  658811 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787929  658811 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:12:34.787996  658811 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:12:34.788075  658811 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:12:34.788161  658811 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:12:34.788246  658811 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:12:34.788307  658811 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:12:34.788377  658811 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:12:34.788464  658811 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:12:34.788510  658811 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:12:34.788574  658811 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:12:34.788677  658811 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:12:34.789842  658811 out.go:252]   - Booting up control plane ...
	I1124 03:12:34.789955  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:12:34.790029  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:12:34.790102  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:12:34.790202  658811 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:12:34.790286  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:12:34.790369  658811 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:12:34.790438  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:12:34.790470  658811 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:12:34.790573  658811 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:12:34.790662  658811 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:12:34.790715  658811 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001939634s
	I1124 03:12:34.790808  658811 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:12:34.790874  658811 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 03:12:34.790987  658811 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:12:34.791057  658811 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:12:34.791109  658811 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.83516238s
	I1124 03:12:34.791172  658811 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.120221493s
	I1124 03:12:34.791231  658811 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501624476s
	I1124 03:12:34.791319  658811 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:12:34.791443  658811 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:12:34.791516  658811 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:12:34.791778  658811 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-284604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:12:34.791865  658811 kubeadm.go:319] [bootstrap-token] Using token: 6opk0j.95uwfc60sd8szhpc
	I1124 03:12:34.793026  658811 out.go:252]   - Configuring RBAC rules ...
	I1124 03:12:34.793125  658811 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:12:34.793213  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:12:34.793344  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:12:34.793455  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:12:34.793557  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:12:34.793642  658811 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:12:34.793774  658811 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:12:34.793810  658811 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:12:34.793851  658811 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:12:34.793857  658811 kubeadm.go:319] 
	I1124 03:12:34.793964  658811 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:12:34.793973  658811 kubeadm.go:319] 
	I1124 03:12:34.794046  658811 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:12:34.794053  658811 kubeadm.go:319] 
	I1124 03:12:34.794074  658811 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:12:34.794151  658811 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:12:34.794229  658811 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:12:34.794239  658811 kubeadm.go:319] 
	I1124 03:12:34.794318  658811 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:12:34.794327  658811 kubeadm.go:319] 
	I1124 03:12:34.794375  658811 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:12:34.794381  658811 kubeadm.go:319] 
	I1124 03:12:34.794424  658811 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:12:34.794490  658811 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:12:34.794554  658811 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:12:34.794560  658811 kubeadm.go:319] 
	I1124 03:12:34.794633  658811 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:12:34.794705  658811 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:12:34.794712  658811 kubeadm.go:319] 
	I1124 03:12:34.794781  658811 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.794955  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:12:34.794990  658811 kubeadm.go:319] 	--control-plane 
	I1124 03:12:34.794996  658811 kubeadm.go:319] 
	I1124 03:12:34.795133  658811 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:12:34.795142  658811 kubeadm.go:319] 
	I1124 03:12:34.795208  658811 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.795304  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
	I1124 03:12:34.795316  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:34.795322  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:34.796503  658811 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1124 03:12:29.901574  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.399665  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.206353  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.206828  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.667383  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:35.167626  650744 pod_ready.go:94] pod "coredns-5dd5756b68-5nwx9" is "Ready"
	I1124 03:12:35.167652  650744 pod_ready.go:86] duration metric: took 36.006547637s for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.170471  650744 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.174915  650744 pod_ready.go:94] pod "etcd-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.174952  650744 pod_ready.go:86] duration metric: took 4.460425ms for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.178276  650744 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.181797  650744 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.181815  650744 pod_ready.go:86] duration metric: took 3.521385ms for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.184086  650744 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.364640  650744 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.364666  650744 pod_ready.go:86] duration metric: took 180.561055ms for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.566321  650744 pod_ready.go:83] waiting for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.965760  650744 pod_ready.go:94] pod "kube-proxy-r82jh" is "Ready"
	I1124 03:12:35.965786  650744 pod_ready.go:86] duration metric: took 399.441601ms for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.166112  650744 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564858  650744 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-579951" is "Ready"
	I1124 03:12:36.564911  650744 pod_ready.go:86] duration metric: took 398.774389ms for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564927  650744 pod_ready.go:40] duration metric: took 37.40842222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:36.606666  650744 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 03:12:36.609650  650744 out.go:203] 
	W1124 03:12:36.610839  650744 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:12:36.611943  650744 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:12:36.613009  650744 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-579951" cluster and "default" namespace by default
	I1124 03:12:34.797545  658811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:12:34.801904  658811 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:12:34.801919  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:12:34.815659  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:12:35.008985  658811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:12:35.009118  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-284604 minikube.k8s.io/updated_at=2025_11_24T03_12_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=embed-certs-284604 minikube.k8s.io/primary=true
	I1124 03:12:35.009137  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.019423  658811 ops.go:34] apiserver oom_adj: -16
	I1124 03:12:35.098937  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.600025  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.099882  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.599914  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.099714  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.599861  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.098989  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.599248  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.099379  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.599598  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.664570  658811 kubeadm.go:1114] duration metric: took 4.655535544s to wait for elevateKubeSystemPrivileges
	I1124 03:12:39.664621  658811 kubeadm.go:403] duration metric: took 17.206413974s to StartCluster
	I1124 03:12:39.664642  658811 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.664720  658811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:39.666858  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.667137  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:12:39.667148  658811 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:39.667230  658811 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:39.667331  658811 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-284604"
	I1124 03:12:39.667356  658811 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-284604"
	I1124 03:12:39.667360  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:39.667396  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.667427  658811 addons.go:70] Setting default-storageclass=true in profile "embed-certs-284604"
	I1124 03:12:39.667451  658811 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-284604"
	I1124 03:12:39.667810  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.667990  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.668614  658811 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:39.670239  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:39.693324  658811 addons.go:239] Setting addon default-storageclass=true in "embed-certs-284604"
	I1124 03:12:39.693377  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.693617  658811 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 03:12:34.900232  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:36.901987  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:39.399311  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:39.693843  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.695301  658811 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.695324  658811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:39.695401  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.723273  658811 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.723298  658811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:39.723378  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.730678  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.746663  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.790082  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:12:39.807223  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:39.854663  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.859938  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.988561  658811 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 03:12:39.990213  658811 node_ready.go:35] waiting up to 6m0s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:12:40.170444  658811 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1124 03:12:36.707151  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:39.206261  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:41.206507  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	I1124 03:12:40.171595  658811 addons.go:530] duration metric: took 504.363947ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:12:40.492653  658811 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-284604" context rescaled to 1 replicas
	W1124 03:12:41.992667  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:43.993353  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:41.399566  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.899302  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.705614  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.706618  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.993493  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:47.993708  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:46.399440  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.399607  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.205812  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:50.206724  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 24 03:12:22 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:22.05833608Z" level=info msg="Started container" PID=1752 containerID=d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn/dashboard-metrics-scraper id=0f1d9264-a1dc-44af-a832-50ec6f2cad89 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e1d97e02735f8d8a4110cf0f3166803dab09205162c9400fdaa3b5f617ed4c73
	Nov 24 03:12:23 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:22.999882149Z" level=info msg="Removing container: 98ab232be61532c8216c25ac45b87b60ae9a5888ad784c700a95d30a80b1ca01" id=f92f4ab5-1ae3-46f5-9542-cf1040e4f325 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:12:23 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:23.01125384Z" level=info msg="Removed container 98ab232be61532c8216c25ac45b87b60ae9a5888ad784c700a95d30a80b1ca01: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn/dashboard-metrics-scraper" id=f92f4ab5-1ae3-46f5-9542-cf1040e4f325 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.021046515Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=6c6e6fb6-37ab-4096-af30-11efd583ef2f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.0243332Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=58d306b0-c16a-4ee1-88d9-0edf7ad638bb name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.0257834Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=ae7a0748-c804-4b7e-8b1d-69a4a4b55270 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.026047417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.034987628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.035235873Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/43aacfa3d0e968d79616b3b2f975a15475873bfc242f6247c78c0391e942a6be/merged/etc/passwd: no such file or directory"
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.03526472Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/43aacfa3d0e968d79616b3b2f975a15475873bfc242f6247c78c0391e942a6be/merged/etc/group: no such file or directory"
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.035551288Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.075325717Z" level=info msg="Created container cb140932ac86175b67bebb44bcf5349167921dded9f33f07bf73f2c99536262b: kube-system/storage-provisioner/storage-provisioner" id=ae7a0748-c804-4b7e-8b1d-69a4a4b55270 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.076561196Z" level=info msg="Starting container: cb140932ac86175b67bebb44bcf5349167921dded9f33f07bf73f2c99536262b" id=8c7ca3d3-b515-41ad-b3a0-18ebf49f11eb name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:12:30 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:30.080857664Z" level=info msg="Started container" PID=1766 containerID=cb140932ac86175b67bebb44bcf5349167921dded9f33f07bf73f2c99536262b description=kube-system/storage-provisioner/storage-provisioner id=8c7ca3d3-b515-41ad-b3a0-18ebf49f11eb name=/runtime.v1.RuntimeService/StartContainer sandboxID=905c81ffe1ddece5ca63e1255676b07b649e31828dcebaca14ef8f7519923f87
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.887369216Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b9817382-fd98-4752-b149-e1369e1ba283 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.888227302Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b9a62157-9148-4e57-8e35-cce48968bf87 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.889034238Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn/dashboard-metrics-scraper" id=2f44bd97-218b-4556-ba68-fd07c07c8730 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.889165943Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.895910546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.896423845Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.921430676Z" level=info msg="Created container ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn/dashboard-metrics-scraper" id=2f44bd97-218b-4556-ba68-fd07c07c8730 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.92180592Z" level=info msg="Starting container: ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a" id=eeaaec46-8aeb-4aa4-9cdf-e058f94aae94 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:12:44 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:44.923547401Z" level=info msg="Started container" PID=1802 containerID=ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn/dashboard-metrics-scraper id=eeaaec46-8aeb-4aa4-9cdf-e058f94aae94 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e1d97e02735f8d8a4110cf0f3166803dab09205162c9400fdaa3b5f617ed4c73
	Nov 24 03:12:45 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:45.057279061Z" level=info msg="Removing container: d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab" id=6b071593-a11f-4331-a6b0-0b3eb218da12 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:12:45 old-k8s-version-579951 crio[578]: time="2025-11-24T03:12:45.066785843Z" level=info msg="Removed container d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn/dashboard-metrics-scraper" id=6b071593-a11f-4331-a6b0-0b3eb218da12 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	ea10d1278a0b1       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   2                   e1d97e02735f8       dashboard-metrics-scraper-5f989dc9cf-lbkcn       kubernetes-dashboard
	cb140932ac861       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   905c81ffe1dde       storage-provisioner                              kube-system
	c0829291d94ab       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   35 seconds ago      Running             kubernetes-dashboard        0                   110f8c07eb8f1       kubernetes-dashboard-8694d4445c-8b2mk            kubernetes-dashboard
	4c225ee065df8       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           54 seconds ago      Running             coredns                     0                   c390473ebd07c       coredns-5dd5756b68-5nwx9                         kube-system
	b2fb7244da7c5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   9832406d11eff       busybox                                          default
	ed35b2d105195       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   406b11cc9cfe2       kindnet-gdpzl                                    kube-system
	cbd2e7dfcfb37       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   905c81ffe1dde       storage-provisioner                              kube-system
	bbc5f27e635d1       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           54 seconds ago      Running             kube-proxy                  0                   6e9a5c6619824       kube-proxy-r82jh                                 kube-system
	cc8b5ee4851c9       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           57 seconds ago      Running             kube-apiserver              0                   951c4b0d2527d       kube-apiserver-old-k8s-version-579951            kube-system
	3176f2d8220ea       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           57 seconds ago      Running             kube-controller-manager     0                   5a6eba6528247       kube-controller-manager-old-k8s-version-579951   kube-system
	30d22d684ad75       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           57 seconds ago      Running             etcd                        0                   0e2ac993b09bd       etcd-old-k8s-version-579951                      kube-system
	3356da3bf9c82       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           58 seconds ago      Running             kube-scheduler              0                   299621d25923a       kube-scheduler-old-k8s-version-579951            kube-system
	
	
	==> coredns [4c225ee065df81dadf568669357be2b97899826cbcb60f9c3ac3b714637ac073] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:60204 - 51660 "HINFO IN 4067648946028489573.5797369737411090544. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.082919287s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-579951
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-579951
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=old-k8s-version-579951
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_10_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:10:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-579951
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:12:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:12:28 +0000   Mon, 24 Nov 2025 03:10:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:12:28 +0000   Mon, 24 Nov 2025 03:10:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:12:28 +0000   Mon, 24 Nov 2025 03:10:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:12:28 +0000   Mon, 24 Nov 2025 03:11:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-579951
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                5d61a30e-9821-4be7-b90f-0f413e931a19
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-5dd5756b68-5nwx9                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-old-k8s-version-579951                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-gdpzl                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-579951             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-old-k8s-version-579951    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-r82jh                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-579951             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-lbkcn        0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-8b2mk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m2s               kubelet          Node old-k8s-version-579951 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s               kubelet          Node old-k8s-version-579951 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s               kubelet          Node old-k8s-version-579951 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m2s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node old-k8s-version-579951 event: Registered Node old-k8s-version-579951 in Controller
	  Normal  NodeReady                96s                kubelet          Node old-k8s-version-579951 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node old-k8s-version-579951 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node old-k8s-version-579951 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node old-k8s-version-579951 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           42s                node-controller  Node old-k8s-version-579951 event: Registered Node old-k8s-version-579951 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [30d22d684ad7501e38080ff45bbe87f71a21252754ba692fc20125e3845f807a] <==
	{"level":"info","ts":"2025-11-24T03:12:11.242497Z","caller":"traceutil/trace.go:171","msg":"trace[1928993192] transaction","detail":"{read_only:false; response_revision:534; number_of_response:1; }","duration":"119.235453ms","start":"2025-11-24T03:12:11.123251Z","end":"2025-11-24T03:12:11.242486Z","steps":["trace[1928993192] 'process raft request'  (duration: 118.851095ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.4454Z","caller":"traceutil/trace.go:171","msg":"trace[2059375659] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"191.875542ms","start":"2025-11-24T03:12:11.253494Z","end":"2025-11-24T03:12:11.44537Z","steps":["trace[2059375659] 'process raft request'  (duration: 129.376271ms)","trace[2059375659] 'compare'  (duration: 62.274794ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:11.445424Z","caller":"traceutil/trace.go:171","msg":"trace[1401579851] transaction","detail":"{read_only:false; response_revision:542; number_of_response:1; }","duration":"137.820678ms","start":"2025-11-24T03:12:11.307589Z","end":"2025-11-24T03:12:11.445409Z","steps":["trace[1401579851] 'process raft request'  (duration: 137.774909ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.445464Z","caller":"traceutil/trace.go:171","msg":"trace[1394475637] transaction","detail":"{read_only:false; response_revision:541; number_of_response:1; }","duration":"188.804117ms","start":"2025-11-24T03:12:11.256653Z","end":"2025-11-24T03:12:11.445457Z","steps":["trace[1394475637] 'process raft request'  (duration: 188.651582ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.44549Z","caller":"traceutil/trace.go:171","msg":"trace[1132775598] transaction","detail":"{read_only:false; response_revision:540; number_of_response:1; }","duration":"191.88256ms","start":"2025-11-24T03:12:11.253602Z","end":"2025-11-24T03:12:11.445484Z","steps":["trace[1132775598] 'process raft request'  (duration: 191.67009ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.445483Z","caller":"traceutil/trace.go:171","msg":"trace[1462021919] linearizableReadLoop","detail":"{readStateIndex:568; appliedIndex:566; }","duration":"189.426898ms","start":"2025-11-24T03:12:11.256037Z","end":"2025-11-24T03:12:11.445464Z","steps":["trace[1462021919] 'read index received'  (duration: 46.454859ms)","trace[1462021919] 'applied index is now lower than readState.Index'  (duration: 142.970544ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T03:12:11.445565Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.517167ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper\" ","response":"range_response_count:1 size:4430"}
	{"level":"info","ts":"2025-11-24T03:12:11.446067Z","caller":"traceutil/trace.go:171","msg":"trace[1728946821] range","detail":"{range_begin:/registry/deployments/kubernetes-dashboard/dashboard-metrics-scraper; range_end:; response_count:1; response_revision:542; }","duration":"190.024276ms","start":"2025-11-24T03:12:11.256025Z","end":"2025-11-24T03:12:11.446049Z","steps":["trace[1728946821] 'agreement among raft nodes before linearized reading'  (duration: 189.486106ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.635558Z","caller":"traceutil/trace.go:171","msg":"trace[1035930019] transaction","detail":"{read_only:false; response_revision:551; number_of_response:1; }","duration":"174.433545ms","start":"2025-11-24T03:12:11.461088Z","end":"2025-11-24T03:12:11.635522Z","steps":["trace[1035930019] 'process raft request'  (duration: 87.83715ms)","trace[1035930019] 'compare'  (duration: 86.397039ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:11.635588Z","caller":"traceutil/trace.go:171","msg":"trace[1494662236] transaction","detail":"{read_only:false; response_revision:552; number_of_response:1; }","duration":"170.608451ms","start":"2025-11-24T03:12:11.464962Z","end":"2025-11-24T03:12:11.635571Z","steps":["trace[1494662236] 'process raft request'  (duration: 170.500066ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.635812Z","caller":"traceutil/trace.go:171","msg":"trace[1441150717] transaction","detail":"{read_only:false; response_revision:554; number_of_response:1; }","duration":"169.981505ms","start":"2025-11-24T03:12:11.465818Z","end":"2025-11-24T03:12:11.635799Z","steps":["trace[1441150717] 'process raft request'  (duration: 169.888533ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.635831Z","caller":"traceutil/trace.go:171","msg":"trace[1707450590] transaction","detail":"{read_only:false; response_revision:553; number_of_response:1; }","duration":"170.664537ms","start":"2025-11-24T03:12:11.465143Z","end":"2025-11-24T03:12:11.635807Z","steps":["trace[1707450590] 'process raft request'  (duration: 170.385783ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:11.805757Z","caller":"traceutil/trace.go:171","msg":"trace[437685081] linearizableReadLoop","detail":"{readStateIndex:584; appliedIndex:582; }","duration":"163.009648ms","start":"2025-11-24T03:12:11.64273Z","end":"2025-11-24T03:12:11.805739Z","steps":["trace[437685081] 'read index received'  (duration: 34.143399ms)","trace[437685081] 'applied index is now lower than readState.Index'  (duration: 128.865469ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:11.805781Z","caller":"traceutil/trace.go:171","msg":"trace[220475867] transaction","detail":"{read_only:false; response_revision:556; number_of_response:1; }","duration":"164.03594ms","start":"2025-11-24T03:12:11.641722Z","end":"2025-11-24T03:12:11.805758Z","steps":["trace[220475867] 'process raft request'  (duration: 117.312132ms)","trace[220475867] 'compare'  (duration: 46.544376ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:11.805831Z","caller":"traceutil/trace.go:171","msg":"trace[1577574396] transaction","detail":"{read_only:false; response_revision:557; number_of_response:1; }","duration":"162.314634ms","start":"2025-11-24T03:12:11.643501Z","end":"2025-11-24T03:12:11.805816Z","steps":["trace[1577574396] 'process raft request'  (duration: 162.190784ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:12:11.805918Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.158512ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-8694d4445c\" ","response":"range_response_count:1 size:3138"}
	{"level":"info","ts":"2025-11-24T03:12:11.805957Z","caller":"traceutil/trace.go:171","msg":"trace[1986015274] range","detail":"{range_begin:/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-8694d4445c; range_end:; response_count:1; response_revision:557; }","duration":"163.231743ms","start":"2025-11-24T03:12:11.642713Z","end":"2025-11-24T03:12:11.805944Z","steps":["trace[1986015274] 'agreement among raft nodes before linearized reading'  (duration: 163.128265ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:12:11.805974Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.681958ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-5nwx9\" ","response":"range_response_count:1 size:4992"}
	{"level":"info","ts":"2025-11-24T03:12:11.806031Z","caller":"traceutil/trace.go:171","msg":"trace[274014986] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-5nwx9; range_end:; response_count:1; response_revision:557; }","duration":"142.745614ms","start":"2025-11-24T03:12:11.663275Z","end":"2025-11-24T03:12:11.80602Z","steps":["trace[274014986] 'agreement among raft nodes before linearized reading'  (duration: 142.656705ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:12.647702Z","caller":"traceutil/trace.go:171","msg":"trace[372732901] transaction","detail":"{read_only:false; response_revision:558; number_of_response:1; }","duration":"297.099858ms","start":"2025-11-24T03:12:12.350588Z","end":"2025-11-24T03:12:12.647688Z","steps":["trace[372732901] 'process raft request'  (duration: 297.006221ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:28.792303Z","caller":"traceutil/trace.go:171","msg":"trace[1974596828] linearizableReadLoop","detail":"{readStateIndex:623; appliedIndex:622; }","duration":"128.95868ms","start":"2025-11-24T03:12:28.663325Z","end":"2025-11-24T03:12:28.792284Z","steps":["trace[1974596828] 'read index received'  (duration: 128.868203ms)","trace[1974596828] 'applied index is now lower than readState.Index'  (duration: 89.618µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:28.792419Z","caller":"traceutil/trace.go:171","msg":"trace[1173164363] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"134.013278ms","start":"2025-11-24T03:12:28.658379Z","end":"2025-11-24T03:12:28.792392Z","steps":["trace[1173164363] 'process raft request'  (duration: 133.746689ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T03:12:28.792498Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"129.168972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-5nwx9\" ","response":"range_response_count:1 size:4992"}
	{"level":"info","ts":"2025-11-24T03:12:28.792535Z","caller":"traceutil/trace.go:171","msg":"trace[1597943492] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-5nwx9; range_end:; response_count:1; response_revision:590; }","duration":"129.231863ms","start":"2025-11-24T03:12:28.66329Z","end":"2025-11-24T03:12:28.792522Z","steps":["trace[1597943492] 'agreement among raft nodes before linearized reading'  (duration: 129.073023ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:28.819172Z","caller":"traceutil/trace.go:171","msg":"trace[328415600] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"145.87701ms","start":"2025-11-24T03:12:28.673282Z","end":"2025-11-24T03:12:28.819159Z","steps":["trace[328415600] 'process raft request'  (duration: 145.758687ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:12:53 up  1:55,  0 user,  load average: 5.46, 4.30, 2.74
	Linux old-k8s-version-579951 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [ed35b2d1051955151fc61b8cc47f4e8d1bd605dd3fe30e9571e13bb8c6a72a2d] <==
	I1124 03:11:59.400660       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:11:59.400924       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 03:11:59.401114       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:11:59.401135       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:11:59.401163       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:11:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:11:59.698364       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:11:59.698437       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:11:59.698453       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:11:59.698630       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:11:59.898520       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:11:59.898575       1 metrics.go:72] Registering metrics
	I1124 03:11:59.898661       1 controller.go:711] "Syncing nftables rules"
	I1124 03:12:09.699106       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:12:09.699156       1 main.go:301] handling current node
	I1124 03:12:19.699021       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:12:19.699060       1 main.go:301] handling current node
	I1124 03:12:29.699065       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:12:29.699104       1 main.go:301] handling current node
	I1124 03:12:39.700248       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:12:39.700293       1 main.go:301] handling current node
	I1124 03:12:49.702554       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 03:12:49.702591       1 main.go:301] handling current node
	
	
	==> kube-apiserver [cc8b5ee4851c9ae1241dd77995f3d1a2e725abb08136f47c106f5adf7f25f2a7] <==
	I1124 03:11:57.986735       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:11:58.003966       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 03:11:58.003999       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 03:11:58.004342       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 03:11:58.004385       1 shared_informer.go:318] Caches are synced for configmaps
	I1124 03:11:58.004840       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 03:11:58.006094       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 03:11:58.006418       1 aggregator.go:166] initial CRD sync complete...
	I1124 03:11:58.006442       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 03:11:58.006449       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:11:58.006458       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:11:58.007501       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1124 03:11:58.012545       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 03:11:58.908827       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:11:58.997530       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 03:11:59.028113       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 03:11:59.048840       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:11:59.055267       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:11:59.063029       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 03:11:59.101839       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.105.51.91"}
	I1124 03:11:59.113207       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.100.35.195"}
	I1124 03:12:10.894107       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:12:10.930596       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 03:12:11.029282       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [3176f2d8220eaa411e72fa77d582041c78e4d0b8acbd739cd01992ec3cfa7230] <==
	I1124 03:12:11.123058       1 shared_informer.go:318] Caches are synced for stateful set
	I1124 03:12:11.243804       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1124 03:12:11.303604       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="257.087262ms"
	I1124 03:12:11.303787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.216µs"
	I1124 03:12:11.443085       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:12:11.446611       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:12:11.446642       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 03:12:11.447806       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-lbkcn"
	I1124 03:12:11.448445       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-8b2mk"
	I1124 03:12:11.460875       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="339.653085ms"
	I1124 03:12:11.461314       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="218.762514ms"
	I1124 03:12:11.637441       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="176.484399ms"
	I1124 03:12:11.637550       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="176.194963ms"
	I1124 03:12:11.637965       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="106.179µs"
	I1124 03:12:11.808204       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="106.396µs"
	I1124 03:12:11.812463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="174.960918ms"
	I1124 03:12:11.812563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="58.675µs"
	I1124 03:12:18.034326       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.967331ms"
	I1124 03:12:18.034511       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="84.81µs"
	I1124 03:12:22.007826       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="73.119µs"
	I1124 03:12:23.010750       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="72.582µs"
	I1124 03:12:24.014463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="75.01µs"
	I1124 03:12:35.094478       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="8.729619ms"
	I1124 03:12:35.094607       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.416µs"
	I1124 03:12:45.067804       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.225µs"
	
	
	==> kube-proxy [bbc5f27e635d1171390cb9cc082c8e71358be7dd9d3966888be81466bec32466] <==
	I1124 03:11:59.290816       1 server_others.go:69] "Using iptables proxy"
	I1124 03:11:59.299166       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1124 03:11:59.316614       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:11:59.318808       1 server_others.go:152] "Using iptables Proxier"
	I1124 03:11:59.318830       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 03:11:59.318836       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 03:11:59.318866       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 03:11:59.319135       1 server.go:846] "Version info" version="v1.28.0"
	I1124 03:11:59.319157       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:11:59.320170       1 config.go:188] "Starting service config controller"
	I1124 03:11:59.320208       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 03:11:59.320246       1 config.go:97] "Starting endpoint slice config controller"
	I1124 03:11:59.320251       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 03:11:59.320486       1 config.go:315] "Starting node config controller"
	I1124 03:11:59.320510       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 03:11:59.421209       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1124 03:11:59.421234       1 shared_informer.go:318] Caches are synced for service config
	I1124 03:11:59.421308       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3356da3bf9c8232ed305911fa37644fd0513640f4477238b1a7e39b8e438c2a0] <==
	I1124 03:11:56.081101       1 serving.go:348] Generated self-signed cert in-memory
	W1124 03:11:57.959345       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:11:57.959377       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 03:11:57.959389       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:11:57.959399       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:11:57.987598       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1124 03:11:57.990944       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:11:57.995275       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1124 03:11:57.995390       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1124 03:11:57.995566       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:11:57.996109       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 03:11:58.096298       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
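	The requestheader_controller warning above carries its own usual remedy. Filled in, it would look roughly like the line below; the rolebinding name and service account are illustrative placeholders rather than values taken from this run, and the scheduler here proceeds anyway once the client-ca cache syncs:
	  $ kubectl --context old-k8s-version-579951 create rolebinding extension-apiserver-authentication-reader -n kube-system --role=extension-apiserver-authentication-reader --serviceaccount=kube-system:kube-scheduler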
	
	
	==> kubelet <==
	Nov 24 03:12:11 old-k8s-version-579951 kubelet[737]: I1124 03:12:11.457513     737 topology_manager.go:215] "Topology Admit Handler" podUID="36c6705a-eceb-43a7-9fce-96446385e0e3" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-8b2mk"
	Nov 24 03:12:11 old-k8s-version-579951 kubelet[737]: I1124 03:12:11.458492     737 topology_manager.go:215] "Topology Admit Handler" podUID="5096b231-1ea7-4e83-9132-f8255b42e564" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-lbkcn"
	Nov 24 03:12:11 old-k8s-version-579951 kubelet[737]: I1124 03:12:11.635061     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfxkr\" (UniqueName: \"kubernetes.io/projected/36c6705a-eceb-43a7-9fce-96446385e0e3-kube-api-access-nfxkr\") pod \"kubernetes-dashboard-8694d4445c-8b2mk\" (UID: \"36c6705a-eceb-43a7-9fce-96446385e0e3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8b2mk"
	Nov 24 03:12:11 old-k8s-version-579951 kubelet[737]: I1124 03:12:11.635146     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/36c6705a-eceb-43a7-9fce-96446385e0e3-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-8b2mk\" (UID: \"36c6705a-eceb-43a7-9fce-96446385e0e3\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8b2mk"
	Nov 24 03:12:11 old-k8s-version-579951 kubelet[737]: I1124 03:12:11.635192     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5096b231-1ea7-4e83-9132-f8255b42e564-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-lbkcn\" (UID: \"5096b231-1ea7-4e83-9132-f8255b42e564\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn"
	Nov 24 03:12:11 old-k8s-version-579951 kubelet[737]: I1124 03:12:11.635259     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcf8n\" (UniqueName: \"kubernetes.io/projected/5096b231-1ea7-4e83-9132-f8255b42e564-kube-api-access-gcf8n\") pod \"dashboard-metrics-scraper-5f989dc9cf-lbkcn\" (UID: \"5096b231-1ea7-4e83-9132-f8255b42e564\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn"
	Nov 24 03:12:21 old-k8s-version-579951 kubelet[737]: I1124 03:12:21.991527     737 scope.go:117] "RemoveContainer" containerID="98ab232be61532c8216c25ac45b87b60ae9a5888ad784c700a95d30a80b1ca01"
	Nov 24 03:12:22 old-k8s-version-579951 kubelet[737]: I1124 03:12:22.008860     737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-8b2mk" podStartSLOduration=5.852248983 podCreationTimestamp="2025-11-24 03:12:11 +0000 UTC" firstStartedPulling="2025-11-24 03:12:12.347825025 +0000 UTC m=+17.556511071" lastFinishedPulling="2025-11-24 03:12:17.503019624 +0000 UTC m=+22.711705680" observedRunningTime="2025-11-24 03:12:18.018594918 +0000 UTC m=+23.227280976" watchObservedRunningTime="2025-11-24 03:12:22.007443592 +0000 UTC m=+27.216129648"
	Nov 24 03:12:22 old-k8s-version-579951 kubelet[737]: I1124 03:12:22.996930     737 scope.go:117] "RemoveContainer" containerID="98ab232be61532c8216c25ac45b87b60ae9a5888ad784c700a95d30a80b1ca01"
	Nov 24 03:12:22 old-k8s-version-579951 kubelet[737]: I1124 03:12:22.997253     737 scope.go:117] "RemoveContainer" containerID="d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab"
	Nov 24 03:12:22 old-k8s-version-579951 kubelet[737]: E1124 03:12:22.997639     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbkcn_kubernetes-dashboard(5096b231-1ea7-4e83-9132-f8255b42e564)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn" podUID="5096b231-1ea7-4e83-9132-f8255b42e564"
	Nov 24 03:12:24 old-k8s-version-579951 kubelet[737]: I1124 03:12:24.001362     737 scope.go:117] "RemoveContainer" containerID="d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab"
	Nov 24 03:12:24 old-k8s-version-579951 kubelet[737]: E1124 03:12:24.001773     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbkcn_kubernetes-dashboard(5096b231-1ea7-4e83-9132-f8255b42e564)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn" podUID="5096b231-1ea7-4e83-9132-f8255b42e564"
	Nov 24 03:12:30 old-k8s-version-579951 kubelet[737]: I1124 03:12:30.019981     737 scope.go:117] "RemoveContainer" containerID="cbd2e7dfcfb37a19af31d60fb1906fc2f2ff1f04f8b5e0b378efbf444e50673f"
	Nov 24 03:12:32 old-k8s-version-579951 kubelet[737]: I1124 03:12:32.061272     737 scope.go:117] "RemoveContainer" containerID="d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab"
	Nov 24 03:12:32 old-k8s-version-579951 kubelet[737]: E1124 03:12:32.061670     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbkcn_kubernetes-dashboard(5096b231-1ea7-4e83-9132-f8255b42e564)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn" podUID="5096b231-1ea7-4e83-9132-f8255b42e564"
	Nov 24 03:12:44 old-k8s-version-579951 kubelet[737]: I1124 03:12:44.886794     737 scope.go:117] "RemoveContainer" containerID="d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab"
	Nov 24 03:12:44 old-k8s-version-579951 kubelet[737]: E1124 03:12:44.946679     737 cadvisor_stats_provider.go:444] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/crio-ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a.scope/container\": RecentStats: unable to find data in memory cache], [\"/system.slice/crio-ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a.scope\": RecentStats: unable to find data in memory cache]"
	Nov 24 03:12:45 old-k8s-version-579951 kubelet[737]: I1124 03:12:45.056132     737 scope.go:117] "RemoveContainer" containerID="d8fa68439f6b648a14c1987fef6c7f93597878fe0cea619d1169d3eee3c318ab"
	Nov 24 03:12:45 old-k8s-version-579951 kubelet[737]: I1124 03:12:45.056366     737 scope.go:117] "RemoveContainer" containerID="ea10d1278a0b11b837ca35bd30401c5f64386fc22035d6c5a789542580824d8a"
	Nov 24 03:12:45 old-k8s-version-579951 kubelet[737]: E1124 03:12:45.056738     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-lbkcn_kubernetes-dashboard(5096b231-1ea7-4e83-9132-f8255b42e564)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-lbkcn" podUID="5096b231-1ea7-4e83-9132-f8255b42e564"
	Nov 24 03:12:48 old-k8s-version-579951 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 03:12:48 old-k8s-version-579951 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 03:12:48 old-k8s-version-579951 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 03:12:48 old-k8s-version-579951 systemd[1]: kubelet.service: Consumed 1.454s CPU time.
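	The CrashLoopBackOff entries above name the failing pod directly, so the crash reason can be read from the previous container instance with a standard kubectl call (a sketch reusing the context and pod names from this report):
	  $ kubectl --context old-k8s-version-579951 -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-lbkcn --previous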
	
	
	==> kubernetes-dashboard [c0829291d94ab54222f0c979e770045678982177db6d180fb2f94c79be1258de] <==
	2025/11/24 03:12:17 Starting overwatch
	2025/11/24 03:12:17 Using namespace: kubernetes-dashboard
	2025/11/24 03:12:17 Using in-cluster config to connect to apiserver
	2025/11/24 03:12:17 Using secret token for csrf signing
	2025/11/24 03:12:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 03:12:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 03:12:17 Successful initial request to the apiserver, version: v1.28.0
	2025/11/24 03:12:17 Generating JWE encryption key
	2025/11/24 03:12:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 03:12:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 03:12:17 Initializing JWE encryption key from synchronized object
	2025/11/24 03:12:17 Creating in-cluster Sidecar client
	2025/11/24 03:12:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 03:12:17 Serving insecurely on HTTP port: 9090
	2025/11/24 03:12:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
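	The repeated metric-client failures above point at the dashboard-metrics-scraper service. One way to confirm the service exists and has endpoints (a sketch reusing the context name from this report):
	  $ kubectl --context old-k8s-version-579951 -n kubernetes-dashboard get svc,endpoints dashboard-metrics-scraper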
	
	
	==> storage-provisioner [cb140932ac86175b67bebb44bcf5349167921dded9f33f07bf73f2c99536262b] <==
	I1124 03:12:30.107261       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:12:30.123681       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:12:30.123797       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 03:12:47.516892       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:12:47.517027       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-579951_30a07f27-8ce9-4d33-aff4-87779858de0d!
	I1124 03:12:47.517013       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59a77692-accc-462a-ac9b-8cd00bada505", APIVersion:"v1", ResourceVersion:"613", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-579951_30a07f27-8ce9-4d33-aff4-87779858de0d became leader
	I1124 03:12:47.617657       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-579951_30a07f27-8ce9-4d33-aff4-87779858de0d!
	
	
	==> storage-provisioner [cbd2e7dfcfb37a19af31d60fb1906fc2f2ff1f04f8b5e0b378efbf444e50673f] <==
	I1124 03:11:59.261535       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 03:12:29.263216       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
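	The fatal line above is a plain HTTPS GET against the in-cluster apiserver VIP timing out; the replacement provisioner instance shown earlier came up cleanly afterwards. The probe can be reproduced by hand from inside the node (a sketch; assumes the profile is still running):
	  $ minikube ssh -p old-k8s-version-579951 -- curl -sk --max-time 5 https://10.96.0.1:443/version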
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-579951 -n old-k8s-version-579951
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-579951 -n old-k8s-version-579951: exit status 2 (335.905212ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-579951 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.12s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-284604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-284604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (243.969471ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:13:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-284604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-284604 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-284604 describe deploy/metrics-server -n kube-system: exit status 1 (57.005931ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-284604 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
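The MK_ADDON_ENABLE_PAUSED exit comes from minikube's paused-state check shelling out to runc, as the stderr above shows. The failing probe can be re-run by hand against the node (a sketch, reusing the profile name from this test):
  $ minikube ssh -p embed-certs-284604 -- sudo runc list -f json
The "open /run/runc: no such file or directory" error means runc's default state directory is absent on the node; listing /run the same way (minikube ssh -p embed-certs-284604 -- ls /run) shows which runtime state directories are actually present.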
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-284604
helpers_test.go:243: (dbg) docker inspect embed-certs-284604:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa",
	        "Created": "2025-11-24T03:12:13.144496823Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 660203,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:12:13.190511304Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa/hosts",
	        "LogPath": "/var/lib/docker/containers/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa-json.log",
	        "Name": "/embed-certs-284604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-284604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-284604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa",
	                "LowerDir": "/var/lib/docker/overlay2/af574c392758f969970a6326011c4ed57be640b8c05a9d111cafa150f3cb303b-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af574c392758f969970a6326011c4ed57be640b8c05a9d111cafa150f3cb303b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af574c392758f969970a6326011c4ed57be640b8c05a9d111cafa150f3cb303b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af574c392758f969970a6326011c4ed57be640b8c05a9d111cafa150f3cb303b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-284604",
	                "Source": "/var/lib/docker/volumes/embed-certs-284604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-284604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-284604",
	                "name.minikube.sigs.k8s.io": "embed-certs-284604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "9737cc48e2bd654491985bf7bd03fcf89bc912e7d1ba350ae8a495a3bf15dba8",
	            "SandboxKey": "/var/run/docker/netns/9737cc48e2bd",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33498"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33499"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33502"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33500"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33501"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-284604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1d9fd759284ca1283df730e0f7d581869748db9e3cd1619451e948defda88535",
	                    "EndpointID": "a8d6077492e15c6f2ad70d1d0a4b2aa0d29f03570d7a73cc70e5be68634fb391",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "c2:45:73:49:fe:10",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-284604",
	                        "65dda7ef92bd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284604 -n embed-certs-284604
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-284604 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p newest-cni-438041 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ stop    │ -p old-k8s-version-579951 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-993813 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-603010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993813 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ stop    │ -p no-preload-603010 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-579951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-438041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ image   │ newest-cni-438041 image list --format=json                                                                                                                                                                                                    │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p newest-cni-438041 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p no-preload-603010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p disable-driver-mounts-242597                                                                                                                                                                                                               │ disable-driver-mounts-242597 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ image   │ old-k8s-version-579951 image list --format=json                                                                                                                                                                                               │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p old-k8s-version-579951 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                                                                                     │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                                                                                     │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable metrics-server -p embed-certs-284604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:12:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:12:09.055015  658811 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:12:09.055230  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055247  658811 out.go:374] Setting ErrFile to fd 2...
	I1124 03:12:09.055253  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055468  658811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:12:09.055909  658811 out.go:368] Setting JSON to false
	I1124 03:12:09.056956  658811 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6876,"bootTime":1763947053,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:12:09.057009  658811 start.go:143] virtualization: kvm guest
	I1124 03:12:09.058671  658811 out.go:179] * [embed-certs-284604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:12:09.059850  658811 notify.go:221] Checking for updates...
	I1124 03:12:09.059855  658811 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:12:09.061128  658811 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:12:09.062317  658811 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:09.063358  658811 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:12:09.064255  658811 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:12:09.065078  658811 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:12:09.066407  658811 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066509  658811 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066589  658811 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:12:09.066666  658811 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:12:09.089713  658811 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:12:09.089855  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.145948  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.135562124 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.146071  658811 docker.go:319] overlay module found
	I1124 03:12:09.147708  658811 out.go:179] * Using the docker driver based on user configuration
	I1124 03:12:09.148714  658811 start.go:309] selected driver: docker
	I1124 03:12:09.148737  658811 start.go:927] validating driver "docker" against <nil>
	I1124 03:12:09.148747  658811 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:12:09.149338  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.210343  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.200351707 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.210534  658811 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:12:09.210794  658811 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:09.212381  658811 out.go:179] * Using Docker driver with root privileges
	I1124 03:12:09.213398  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:09.213482  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:09.213497  658811 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:12:09.213574  658811 start.go:353] cluster config:
	{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:09.214730  658811 out.go:179] * Starting "embed-certs-284604" primary control-plane node in "embed-certs-284604" cluster
	I1124 03:12:09.215613  658811 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:12:09.216663  658811 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:12:09.217654  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.217694  658811 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:12:09.217703  658811 cache.go:65] Caching tarball of preloaded images
	I1124 03:12:09.217732  658811 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:12:09.217791  658811 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:12:09.217808  658811 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:12:09.217977  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:09.218021  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json: {Name:mkd4898576ebe0ebf6d2ca35fddd33eac8f127df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:09.239944  658811 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:12:09.239962  658811 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:12:09.239976  658811 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:12:09.240004  658811 start.go:360] acquireMachinesLock for embed-certs-284604: {Name:mkd39be5908e1d289ed5af40b6c2b1c510beffd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:12:09.240088  658811 start.go:364] duration metric: took 68.665µs to acquireMachinesLock for "embed-certs-284604"
	I1124 03:12:09.240109  658811 start.go:93] Provisioning new machine with config: &{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:09.240182  658811 start.go:125] createHost starting for "" (driver="docker")
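
The acquireMachinesLock entries above show the create path serialized behind a named file lock with Delay:500ms and Timeout:10m0s; acquisition was instant (68µs) because nothing else held it. A minimal sketch of that acquire-with-retry pattern, using an O_EXCL lock file purely for illustration (minikube's actual lock implementation differs):

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// acquire polls for an exclusive lock file until timeout, mirroring the
// Delay:500ms / Timeout:10m0s parameters visible in the log.
func acquire(path string, delay, timeout time.Duration) (func(), error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, errors.New("timed out acquiring machines lock")
		}
		time.Sleep(delay)
	}
}

func main() {
	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer release()
	fmt.Println("lock held; safe to provision the machine")
}
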
	I1124 03:12:05.014758  656542 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-993813" ...
	I1124 03:12:05.014805  656542 cli_runner.go:164] Run: docker start default-k8s-diff-port-993813
	I1124 03:12:05.297424  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:05.316835  656542 kic.go:430] container "default-k8s-diff-port-993813" state is running.
	I1124 03:12:05.317309  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:05.336690  656542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/config.json ...
	I1124 03:12:05.336923  656542 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:05.336992  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:05.356564  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:05.356863  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:05.356907  656542 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:05.357642  656542 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39256->127.0.0.1:33488: read: connection reset by peer
	I1124 03:12:08.497704  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.497744  656542 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993813"
	I1124 03:12:08.497799  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.516284  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.516620  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.516642  656542 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993813 && echo "default-k8s-diff-port-993813" | sudo tee /etc/hostname
	I1124 03:12:08.664299  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.664399  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.683215  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.683424  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.683440  656542 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993813/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993813' | sudo tee -a /etc/hosts; 
				fi
			fi
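
The SSH script above is an idempotent /etc/hosts patch: if the hostname already resolves it does nothing, otherwise it rewrites the 127.0.1.1 line in place or appends one. The same logic in Go, for illustration only (minikube performs this over SSH with the shell shown):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostname makes hostname resolve to 127.0.1.1, mirroring the
// grep/sed/tee sequence in the log.
func ensureHostname(hostsFile, hostname string) error {
	data, err := os.ReadFile(hostsFile)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")
	replaced := false
	for i, l := range lines {
		if strings.Contains(l, hostname) {
			return nil // entry already present; nothing to do
		}
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(hostsFile, []byte(strings.Join(lines, "\n")), 0o644)
}

func main() {
	if err := ensureHostname("/etc/hosts", "default-k8s-diff-port-993813"); err != nil {
		fmt.Println(err)
	}
}
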
	I1124 03:12:08.824495  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:08.824534  656542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:08.824571  656542 ubuntu.go:190] setting up certificates
	I1124 03:12:08.824597  656542 provision.go:84] configureAuth start
	I1124 03:12:08.824659  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:08.842592  656542 provision.go:143] copyHostCerts
	I1124 03:12:08.842639  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:08.842651  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:08.842701  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:08.842805  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:08.842813  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:08.842838  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:08.842940  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:08.842950  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:08.842981  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:08.843051  656542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993813 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-993813 localhost minikube]
	I1124 03:12:08.993088  656542 provision.go:177] copyRemoteCerts
	I1124 03:12:08.993141  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:08.993180  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.010481  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.112610  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:09.134182  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 03:12:09.153393  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:12:09.173516  656542 provision.go:87] duration metric: took 348.902104ms to configureAuth
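
configureAuth above regenerates server.pem from the shared CA with the SAN set logged at provision.go:117 (127.0.0.1, the container IP 192.168.76.2, the hostname, localhost, minikube), then copies CA, cert, and key into /etc/docker on the node. A self-contained sketch of issuing such a SAN-bearing server certificate with Go's crypto/x509 (self-signed here for brevity; minikube signs with its ca.pem/ca-key.pem instead):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-993813"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The SANs logged by provision.go:117.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:    []string{"default-k8s-diff-port-993813", "localhost", "minikube"},
	}
	// Self-signed for the sketch; pass the CA cert/key as parent to mimic minikube.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
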
	I1124 03:12:09.173547  656542 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:09.173717  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.173820  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.195519  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:09.195738  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:09.195756  656542 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.551404  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:09.551434  656542 machine.go:97] duration metric: took 4.214494542s to provisionDockerMachine
	I1124 03:12:09.551449  656542 start.go:293] postStartSetup for "default-k8s-diff-port-993813" (driver="docker")
	I1124 03:12:09.551463  656542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:09.551533  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:09.551574  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.572440  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.684044  656542 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:09.688328  656542 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:09.688354  656542 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:09.688365  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:09.688414  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:09.688488  656542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:09.688660  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:09.696023  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:09.725715  656542 start.go:296] duration metric: took 174.248037ms for postStartSetup
	I1124 03:12:09.725795  656542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:09.725851  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.747235  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:06.610202  657716 out.go:252] * Restarting existing docker container for "no-preload-603010" ...
	I1124 03:12:06.610267  657716 cli_runner.go:164] Run: docker start no-preload-603010
	I1124 03:12:06.895418  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:06.913279  657716 kic.go:430] container "no-preload-603010" state is running.
	I1124 03:12:06.913694  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:06.931543  657716 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/config.json ...
	I1124 03:12:06.931779  657716 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:06.931840  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:06.949180  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:06.949422  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:06.949436  657716 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:06.950106  657716 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53738->127.0.0.1:33493: read: connection reset by peer
	I1124 03:12:10.094410  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.094455  657716 ubuntu.go:182] provisioning hostname "no-preload-603010"
	I1124 03:12:10.094548  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.117277  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.117614  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.117637  657716 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-603010 && echo "no-preload-603010" | sudo tee /etc/hostname
	I1124 03:12:10.272082  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.272162  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.293197  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.293525  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.293557  657716 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603010' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603010/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603010' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:10.440289  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:10.440322  657716 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:10.440350  657716 ubuntu.go:190] setting up certificates
	I1124 03:12:10.440374  657716 provision.go:84] configureAuth start
	I1124 03:12:10.440443  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:10.458672  657716 provision.go:143] copyHostCerts
	I1124 03:12:10.458743  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:10.458766  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:10.458857  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:10.459021  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:10.459037  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:10.459080  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:10.459183  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:10.459195  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:10.459232  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:10.459323  657716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.no-preload-603010 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-603010]
	I1124 03:12:10.546420  657716 provision.go:177] copyRemoteCerts
	I1124 03:12:10.546503  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:10.546552  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.564799  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:10.669343  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:10.687953  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:10.707320  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:10.728398  657716 provision.go:87] duration metric: took 288.002675ms to configureAuth
	I1124 03:12:10.728450  657716 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:10.728791  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:10.728992  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.754544  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.754857  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.754907  657716 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.846210  656542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
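
The two df probes above sample the guest's /var over SSH: first percent used (df -h ... print $5), then free gigabytes (df -BG ... print $4). For comparison, the same numbers can be read locally without shelling out, via Statfs (a Linux-only sketch; df's used% differs slightly because it accounts for reserved blocks):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/var", &st); err != nil {
		panic(err)
	}
	total := st.Blocks * uint64(st.Bsize)
	free := st.Bavail * uint64(st.Bsize)
	usedPct := 100 * float64(total-free) / float64(total)
	fmt.Printf("/var: %.0f%% used, %dG free\n", usedPct, free/(1<<30))
}
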
	I1124 03:12:09.851045  656542 fix.go:56] duration metric: took 4.853815531s for fixHost
	I1124 03:12:09.851067  656542 start.go:83] releasing machines lock for "default-k8s-diff-port-993813", held for 4.853861223s
	I1124 03:12:09.851139  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:09.871679  656542 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:09.871744  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.871767  656542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:09.871859  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.897665  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.897832  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.996390  656542 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:10.070447  656542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:10.108350  656542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:10.113659  656542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:10.113732  656542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:10.122258  656542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
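
The find pass above renames any bridge or podman CNI configs to *.mk_disabled so they cannot shadow kindnet; on this node none were present. An equivalent rename pass in Go (a sketch mirroring the find expression, not minikube's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableConflictingCNI renames bridge/podman configs in dir, like the
// `find ... -exec mv {} {}.mk_disabled` in the log.
func disableConflictingCNI(dir string) error {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return err
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled", src)
		}
	}
	return nil
}

func main() {
	if err := disableConflictingCNI("/etc/cni/net.d"); err != nil {
		fmt.Println(err)
	}
}
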
	I1124 03:12:10.122274  656542 start.go:496] detecting cgroup driver to use...
	I1124 03:12:10.122301  656542 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:10.122333  656542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:10.138420  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:10.151623  656542 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:10.151696  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:10.169717  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:10.185403  656542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:10.268937  656542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:10.361626  656542 docker.go:234] disabling docker service ...
	I1124 03:12:10.361713  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:10.376259  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:10.389709  656542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:10.493317  656542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:10.581163  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:10.594309  656542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:10.608489  656542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:10.608559  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.618090  656542 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:10.618147  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.629142  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.639755  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.648289  656542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:10.657390  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.667835  656542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.677148  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.686554  656542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:10.694262  656542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:10.701983  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
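
Taken together, the sed passes above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before crio is restarted. This is reconstructed from the commands, not captured from the node, and the section placement assumes the stock CRI-O drop-in layout:

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
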
	I1124 03:12:10.784645  656542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:12:13.176259  656542 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.391580237s)
	I1124 03:12:13.176297  656542 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:13.176344  656542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:13.182771  656542 start.go:564] Will wait 60s for crictl version
	I1124 03:12:13.182920  656542 ssh_runner.go:195] Run: which crictl
	I1124 03:12:13.188282  656542 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:13.221129  656542 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:13.221208  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.256022  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.289098  656542 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1124 03:12:09.667322  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:11.810684  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:09.241811  658811 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:12:09.242074  658811 start.go:159] libmachine.API.Create for "embed-certs-284604" (driver="docker")
	I1124 03:12:09.242107  658811 client.go:173] LocalClient.Create starting
	I1124 03:12:09.242186  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:12:09.242224  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242246  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242326  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:12:09.242354  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242374  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242824  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:12:09.259427  658811 cli_runner.go:211] docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:12:09.259477  658811 network_create.go:284] running [docker network inspect embed-certs-284604] to gather additional debugging logs...
	I1124 03:12:09.259492  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604
	W1124 03:12:09.275004  658811 cli_runner.go:211] docker network inspect embed-certs-284604 returned with exit code 1
	I1124 03:12:09.275029  658811 network_create.go:287] error running [docker network inspect embed-certs-284604]: docker network inspect embed-certs-284604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-284604 not found
	I1124 03:12:09.275039  658811 network_create.go:289] output of [docker network inspect embed-certs-284604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-284604 not found
	
	** /stderr **
	I1124 03:12:09.275132  658811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:09.292074  658811 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:12:09.292745  658811 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:12:09.293207  658811 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:12:09.293801  658811 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-50b2e4e61586 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:0d:88:19:7d:df} reservation:<nil>}
	I1124 03:12:09.294406  658811 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6fb41680caed IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:a0:6c:44:95:b2} reservation:<nil>}
	I1124 03:12:09.295273  658811 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eef7f0}
	I1124 03:12:09.295296  658811 network_create.go:124] attempt to create docker network embed-certs-284604 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 03:12:09.295333  658811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-284604 embed-certs-284604
	I1124 03:12:09.341016  658811 network_create.go:108] docker network embed-certs-284604 192.168.94.0/24 created
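
The subnet scan above walks candidate /24s with the third octet advancing by 9 (49, 58, 67, 76, 85 were all taken by earlier profiles) and takes the first one no bridge occupies, hence 192.168.94.0/24. A compact sketch of that first-free scan (illustrative; minikube's network.go also handles reservations and other prefix sizes):

package main

import "fmt"

// firstFreeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... (third octet
// stepping by 9, as observed in the log) and returns the first not taken.
func firstFreeSubnet(taken map[string]bool) string {
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			return subnet
		}
	}
	return ""
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
		"192.168.76.0/24": true, "192.168.85.0/24": true,
	}
	fmt.Println(firstFreeSubnet(taken)) // 192.168.94.0/24, as in the log
}
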
	I1124 03:12:09.341044  658811 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-284604" container
	I1124 03:12:09.341097  658811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:12:09.358710  658811 cli_runner.go:164] Run: docker volume create embed-certs-284604 --label name.minikube.sigs.k8s.io=embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:12:09.377491  658811 oci.go:103] Successfully created a docker volume embed-certs-284604
	I1124 03:12:09.377565  658811 cli_runner.go:164] Run: docker run --rm --name embed-certs-284604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --entrypoint /usr/bin/test -v embed-certs-284604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:12:09.757637  658811 oci.go:107] Successfully prepared a docker volume embed-certs-284604
	I1124 03:12:09.757726  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.757742  658811 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:12:09.757816  658811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:12:13.055592  658811 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (3.297719307s)
	I1124 03:12:13.055632  658811 kic.go:203] duration metric: took 3.29788472s to extract preloaded images to volume ...
	W1124 03:12:13.055721  658811 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:12:13.055758  658811 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:12:13.055810  658811 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:12:13.124836  658811 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-284604 --name embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-284604 --network embed-certs-284604 --ip 192.168.94.2 --volume embed-certs-284604:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:12:13.468642  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Running}}
	I1124 03:12:13.493010  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.520114  658811 cli_runner.go:164] Run: docker exec embed-certs-284604 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:12:13.579438  658811 oci.go:144] the created container "embed-certs-284604" has a running status.
	I1124 03:12:13.579473  658811 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa...
	I1124 03:12:13.686392  658811 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:12:13.719014  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.744934  658811 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:12:13.744979  658811 kic_runner.go:114] Args: [docker exec --privileged embed-certs-284604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:12:13.804379  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.833184  658811 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:13.833391  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:13.865266  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:13.865635  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:13.865670  658811 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:13.866448  658811 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55158->127.0.0.1:33498: read: connection reset by peer
	I1124 03:12:13.290552  656542 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:13.314170  656542 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:13.318716  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:13.333300  656542 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:13.333436  656542 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:13.333523  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.375001  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.375027  656542 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:13.375078  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.407152  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.407180  656542 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:13.407190  656542 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 03:12:13.407342  656542 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
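
A note on the unit drop-in above: the empty ExecStart= line is deliberate. For non-oneshot services systemd allows exactly one ExecStart, so an override must first clear the inherited command list before supplying its own. The generic form of the idiom (illustrative path and flags):

# /etc/systemd/system/kubelet.service.d/10-override.conf
[Service]
ExecStart=
ExecStart=/usr/local/bin/kubelet --config=/var/lib/kubelet/config.yaml
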
	I1124 03:12:13.407444  656542 ssh_runner.go:195] Run: crio config
	I1124 03:12:13.468159  656542 cni.go:84] Creating CNI manager for ""
	I1124 03:12:13.468191  656542 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:13.468220  656542 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:13.468251  656542 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993813 NodeName:default-k8s-diff-port-993813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:13.468425  656542 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993813"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:13.468485  656542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:13.480922  656542 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:13.480989  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:13.491437  656542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 03:12:13.510538  656542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:13.531599  656542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
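The 2224 bytes shipped to kubeadm.yaml.new are the multi-document config rendered above: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration separated by `---` in one file. On recent kubeadm releases the file can be sanity-checked before use (a sketch, run on the node):

    # list the document kinds, then let kubeadm validate the whole file
    grep '^kind:' /var/tmp/minikube/kubeadm.yaml.new
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new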
	I1124 03:12:13.550625  656542 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:13.557123  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
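The one-liner above is an idempotent hosts-file update: strip any existing `control-plane.minikube.internal` entry, append a fresh one, and replace `/etc/hosts` with a single `sudo cp` of the temp file. The same pattern, unrolled (illustrative values matching this cluster):

    HOST=control-plane.minikube.internal; IP=192.168.76.2
    { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$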
	I1124 03:12:13.570105  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:13.687069  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:13.711246  656542 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813 for IP: 192.168.76.2
	I1124 03:12:13.711268  656542 certs.go:195] generating shared ca certs ...
	I1124 03:12:13.711287  656542 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:13.711456  656542 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:13.711513  656542 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:13.711526  656542 certs.go:257] generating profile certs ...
	I1124 03:12:13.711642  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key
	I1124 03:12:13.711706  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619
	I1124 03:12:13.711753  656542 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key
	I1124 03:12:13.711996  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:13.712051  656542 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:13.712065  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:13.712101  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:13.712139  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:13.712175  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:13.712240  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.712851  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:13.744604  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:13.773924  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:13.797454  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:13.831783  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 03:12:13.870484  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:13.900124  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:13.922822  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:12:13.948171  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:13.977351  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:14.003032  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:14.029032  656542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:14.044929  656542 ssh_runner.go:195] Run: openssl version
	I1124 03:12:14.055102  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:14.069569  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074149  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074206  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.129455  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:14.139467  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:14.150460  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155547  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155598  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.213122  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:14.224488  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:14.235043  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239741  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239796  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.296275  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
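The `openssl x509 -hash` / `ln -fs` pairs above reproduce what `update-ca-certificates` normally does: OpenSSL locates CA certificates in `/etc/ssl/certs` through symlinks named after the certificate's subject hash (`<hash>.0`), so each trusted PEM gets a hash-named link. By hand, for the minikube CA handled above (a sketch):

    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # b5213941.0 here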
	I1124 03:12:14.307247  656542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:14.315784  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:14.374911  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:14.452037  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:14.514532  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:14.577046  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:14.634822  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
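Each `-checkend 86400` call asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes, and a non-zero status would be minikube's cue to regenerate. The same check in isolation (a sketch against one of the certs copied above):

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
        echo "apiserver cert good for at least another 24h"
    else
        echo "apiserver cert expires within 24h"
    fi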
	I1124 03:12:14.697600  656542 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:14.697704  656542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:14.697759  656542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:14.736428  656542 cri.go:89] found id: "9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6"
	I1124 03:12:14.736451  656542 cri.go:89] found id: "a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329"
	I1124 03:12:14.736458  656542 cri.go:89] found id: "dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7"
	I1124 03:12:14.736462  656542 cri.go:89] found id: "11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e"
	I1124 03:12:14.736466  656542 cri.go:89] found id: ""
	I1124 03:12:14.736511  656542 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:14.754070  656542 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:14Z" level=error msg="open /run/runc: no such file or directory"
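The warning above is benign in this path: `runc list` reads container state from `/run/runc`, and on a freshly restarted node that directory does not exist yet, so there are no paused containers to resume and the restart logic moves on. A quick equivalent check (a sketch; `/run/runc` is the default rootful runc state directory):

    sudo test -d /run/runc && sudo runc list || echo "no runc state; nothing paused"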
	I1124 03:12:14.754156  656542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:14.765200  656542 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:14.765224  656542 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:14.765273  656542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:14.773243  656542 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:14.773947  656542 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993813" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.774328  656542 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993813" cluster setting kubeconfig missing "default-k8s-diff-port-993813" context setting]
	I1124 03:12:14.774925  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.776519  656542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:14.785657  656542 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 03:12:14.785687  656542 kubeadm.go:602] duration metric: took 20.455875ms to restartPrimaryControlPlane
	I1124 03:12:14.785704  656542 kubeadm.go:403] duration metric: took 88.114399ms to StartCluster
	I1124 03:12:14.785722  656542 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.785796  656542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.786941  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.787180  656542 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:14.787429  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:14.787487  656542 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:14.787568  656542 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.787584  656542 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.787592  656542 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:14.787615  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.788183  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.788464  656542 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788516  656542 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993813"
	I1124 03:12:14.788466  656542 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788738  656542 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.788750  656542 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:14.788782  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.789431  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.789731  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.792034  656542 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:14.793166  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.820828  656542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:14.821632  656542 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.821655  656542 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:14.821731  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.821909  656542 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:12:14.822084  656542 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:14.822112  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:14.822188  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.822548  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.827335  656542 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:13.173638  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:13.173665  657716 machine.go:97] duration metric: took 6.241868553s to provisionDockerMachine
	I1124 03:12:13.173679  657716 start.go:293] postStartSetup for "no-preload-603010" (driver="docker")
	I1124 03:12:13.173692  657716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:13.173754  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:13.173803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.199819  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.311414  657716 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:13.316263  657716 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:13.316292  657716 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:13.316304  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:13.316362  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:13.316451  657716 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:13.316564  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:13.330333  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.349678  657716 start.go:296] duration metric: took 175.98281ms for postStartSetup
	I1124 03:12:13.349757  657716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:13.349803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.372668  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.477580  657716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:13.483572  657716 fix.go:56] duration metric: took 6.891356705s for fixHost
	I1124 03:12:13.483602  657716 start.go:83] releasing machines lock for "no-preload-603010", held for 6.891418388s
	I1124 03:12:13.483679  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:13.509057  657716 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:13.509123  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.509169  657716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:13.509281  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.533830  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.535423  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.716640  657716 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:13.727633  657716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:13.784701  657716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:13.789877  657716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:13.789964  657716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:13.799956  657716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:12:13.799989  657716 start.go:496] detecting cgroup driver to use...
	I1124 03:12:13.800021  657716 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:13.800080  657716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:13.821650  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:13.845364  657716 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:13.845437  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:13.876223  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:13.896810  657716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:14.018144  657716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:14.133192  657716 docker.go:234] disabling docker service ...
	I1124 03:12:14.133276  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:14.151812  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:14.167561  657716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:14.282838  657716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:14.401610  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:14.417930  657716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:14.437107  657716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:14.437170  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.449631  657716 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:14.449698  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.462463  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.477641  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.490417  657716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:14.504273  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.516484  657716 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.526509  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.538280  657716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:14.546998  657716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:14.555574  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.685636  657716 ssh_runner.go:195] Run: sudo systemctl restart crio
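The `sed` sequence above edits `/etc/crio/crio.conf.d/02-crio.conf` in place: pin the pause image, switch CRI-O to the `systemd` cgroup manager with conmon in the pod cgroup, and open unprivileged ports via a `default_sysctls` entry, then restart CRI-O to pick everything up. A sketch of the settings those edits converge on, written as a fresh drop-in (`99-example.conf` is a hypothetical name; section layout follows upstream CRI-O defaults and the real 02-crio.conf may differ):

    sudo tee /etc/crio/crio.conf.d/99-example.conf >/dev/null <<'EOF'
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
    EOF
    sudo systemctl restart crio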
	I1124 03:12:14.944749  657716 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:14.944917  657716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:14.950036  657716 start.go:564] Will wait 60s for crictl version
	I1124 03:12:14.950115  657716 ssh_runner.go:195] Run: which crictl
	I1124 03:12:14.954328  657716 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:14.985292  657716 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:14.985374  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.030503  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.075694  657716 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:15.076822  657716 cli_runner.go:164] Run: docker network inspect no-preload-603010 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:15.102488  657716 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:15.108702  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:15.124431  657716 kubeadm.go:884] updating cluster {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:15.124588  657716 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:15.124636  657716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:15.167486  657716 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:15.167521  657716 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:15.167539  657716 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:15.167821  657716 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-603010 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:15.167925  657716 ssh_runner.go:195] Run: crio config
	I1124 03:12:15.235069  657716 cni.go:84] Creating CNI manager for ""
	I1124 03:12:15.235092  657716 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:15.235110  657716 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:15.235137  657716 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603010 NodeName:no-preload-603010 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:15.235315  657716 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603010"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:15.235402  657716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:15.246426  657716 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:15.246486  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:15.255073  657716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:12:15.274174  657716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:15.291964  657716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1124 03:12:15.310704  657716 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:15.315241  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:15.329049  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:15.444004  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:15.468249  657716 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010 for IP: 192.168.85.2
	I1124 03:12:15.468275  657716 certs.go:195] generating shared ca certs ...
	I1124 03:12:15.468303  657716 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:15.468461  657716 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:15.468527  657716 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:15.468545  657716 certs.go:257] generating profile certs ...
	I1124 03:12:15.468671  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key
	I1124 03:12:15.468756  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738
	I1124 03:12:15.468820  657716 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key
	I1124 03:12:15.469056  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:15.469155  657716 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:15.469190  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:15.469235  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:15.469307  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:15.469360  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:15.469452  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:15.470423  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:15.492954  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:15.516840  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:15.539720  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:15.572434  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:12:15.602383  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:15.627969  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:15.650700  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:15.671263  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:15.692710  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:15.715510  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:15.740163  657716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:15.756242  657716 ssh_runner.go:195] Run: openssl version
	I1124 03:12:15.764455  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:15.774930  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779615  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779675  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.837760  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:12:15.848860  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:15.859402  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864242  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864304  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.923088  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:15.933908  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:15.944242  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949198  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949248  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:16.007273  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:16.018117  657716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:16.023108  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:16.086212  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:16.144287  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:16.203439  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:16.267980  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:16.329154  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 03:12:16.391972  657716 kubeadm.go:401] StartCluster: {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:16.392083  657716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:16.392153  657716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:16.431895  657716 cri.go:89] found id: "3dc1c0625a30c329549c40b7e0c43d1ab4d818ccc0fb85ea668066af23a327d3"
	I1124 03:12:16.431924  657716 cri.go:89] found id: "4e8e84f339bed2139577a34b2e6715ab1a8d9a3b425a16886245d61781115cf0"
	I1124 03:12:16.431930  657716 cri.go:89] found id: "7294ac6825ca8afcd936803374aa461bcb4d637ea8743600784037c5acb225b2"
	I1124 03:12:16.431934  657716 cri.go:89] found id: "767a9908e75937581bb6f9fd527e760a401658cde3d3cf28bc2b66613eab1027"
	I1124 03:12:16.431938  657716 cri.go:89] found id: ""
	I1124 03:12:16.431989  657716 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:16.448469  657716 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:16Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:16.448636  657716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:16.460046  657716 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:16.460066  657716 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:16.460159  657716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:16.470578  657716 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:16.472039  657716 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-603010" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.472691  657716 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-603010" cluster setting kubeconfig missing "no-preload-603010" context setting]
	I1124 03:12:16.473827  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.476388  657716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:16.491280  657716 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 03:12:16.491307  657716 kubeadm.go:602] duration metric: took 31.234841ms to restartPrimaryControlPlane
	I1124 03:12:16.491317  657716 kubeadm.go:403] duration metric: took 99.357197ms to StartCluster
	I1124 03:12:16.491333  657716 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.491393  657716 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.492731  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.492990  657716 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:16.493291  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:16.493352  657716 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:16.493441  657716 addons.go:70] Setting storage-provisioner=true in profile "no-preload-603010"
	I1124 03:12:16.493465  657716 addons.go:239] Setting addon storage-provisioner=true in "no-preload-603010"
	W1124 03:12:16.493473  657716 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:16.493503  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494027  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.494266  657716 addons.go:70] Setting dashboard=true in profile "no-preload-603010"
	I1124 03:12:16.494322  657716 addons.go:239] Setting addon dashboard=true in "no-preload-603010"
	I1124 03:12:16.494338  657716 addons.go:70] Setting default-storageclass=true in profile "no-preload-603010"
	W1124 03:12:16.494361  657716 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:16.494434  657716 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603010"
	I1124 03:12:16.494570  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494863  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.495005  657716 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:16.495647  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.496468  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:16.527269  657716 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:16.528480  657716 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:16.528517  657716 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1124 03:12:14.168310  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:16.172923  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:18.176795  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:14.828319  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:14.828372  656542 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:14.828432  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.858092  656542 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:14.858118  656542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:14.858192  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.865650  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.866433  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.895242  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.975501  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:14.992389  656542 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:12:15.008151  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:15.016186  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:15.016211  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:15.031574  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:15.042522  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:15.042540  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:15.074331  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:15.074365  656542 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:15.109090  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:15.109113  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:15.128161  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:15.128184  656542 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:15.147874  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:15.147903  656542 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:15.168191  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:15.168211  656542 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:15.185637  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:15.185661  656542 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:15.202994  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:15.203016  656542 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:15.221608  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:17.996962  656542 node_ready.go:49] node "default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:17.997067  656542 node_ready.go:38] duration metric: took 3.004589581s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:12:17.997096  656542 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:17.997184  656542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:18.834613  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.826385361s)
	I1124 03:12:18.834690  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.803092411s)
	I1124 03:12:18.834853  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.613213665s)
	I1124 03:12:18.834988  656542 api_server.go:72] duration metric: took 4.047778988s to wait for apiserver process to appear ...
	I1124 03:12:18.835771  656542 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:18.835800  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:18.838614  656542 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993813 addons enable metrics-server
	
	I1124 03:12:18.844882  656542 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 03:12:17.043130  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.043165  658811 ubuntu.go:182] provisioning hostname "embed-certs-284604"
	I1124 03:12:17.043247  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.069679  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.070109  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.070142  658811 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-284604 && echo "embed-certs-284604" | sudo tee /etc/hostname
	I1124 03:12:17.259114  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.259199  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.284082  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.284399  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.284433  658811 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-284604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-284604/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-284604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:17.452374  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:17.452411  658811 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:17.452438  658811 ubuntu.go:190] setting up certificates
	I1124 03:12:17.452452  658811 provision.go:84] configureAuth start
	I1124 03:12:17.452521  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:17.483434  658811 provision.go:143] copyHostCerts
	I1124 03:12:17.483502  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:17.483519  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:17.483580  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:17.483712  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:17.483725  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:17.483764  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:17.483851  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:17.483858  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:17.483909  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:17.483990  658811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-284604 san=[127.0.0.1 192.168.94.2 embed-certs-284604 localhost minikube]
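The provision step above generates a server certificate whose SANs must cover every name and address a client might use (127.0.0.1, the container IP, the profile name, localhost, minikube). A minimal Go sketch of that SAN handling using only the standard library; `newServerCert` is an illustrative helper, not minikube's actual code, and it self-signs where minikube signs with its CA:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // newServerCert creates a self-signed server certificate whose SANs include
    // the given hosts, splitting them into IP and DNS entries as the log's
    // san=[...] list suggests. (Hypothetical helper for illustration only.)
    func newServerCert(hosts []string) (certPEM, keyPEM []byte, err error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-284604"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, h := range hosts {
    		if ip := net.ParseIP(h); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, h)
    		}
    	}
    	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
    	if err != nil {
    		return nil, nil, err
    	}
    	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
    	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
    	return certPEM, keyPEM, nil
    }

    func main() {
    	cert, key, err := newServerCert([]string{"127.0.0.1", "192.168.94.2", "embed-certs-284604", "localhost", "minikube"})
    	if err != nil {
    		panic(err)
    	}
    	os.WriteFile("server.pem", cert, 0644)
    	os.WriteFile("server-key.pem", key, 0600)
    }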
	I1124 03:12:17.911206  658811 provision.go:177] copyRemoteCerts
	I1124 03:12:17.911335  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:17.911394  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.943914  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.069938  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:18.098447  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:18.124997  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:18.162531  658811 provision.go:87] duration metric: took 710.055135ms to configureAuth
	I1124 03:12:18.162560  658811 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:18.162764  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:18.162877  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.187248  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:18.187553  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:18.187575  658811 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:18.557227  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:18.557257  658811 machine.go:97] duration metric: took 4.723983027s to provisionDockerMachine
	I1124 03:12:18.557270  658811 client.go:176] duration metric: took 9.315155053s to LocalClient.Create
	I1124 03:12:18.557286  658811 start.go:167] duration metric: took 9.315214435s to libmachine.API.Create "embed-certs-284604"
	I1124 03:12:18.557298  658811 start.go:293] postStartSetup for "embed-certs-284604" (driver="docker")
	I1124 03:12:18.557310  658811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:18.557379  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:18.557432  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.587404  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.715877  658811 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:18.721275  658811 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:18.721309  658811 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:18.721322  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:18.721381  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:18.721473  658811 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:18.721597  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:18.732645  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:18.763370  658811 start.go:296] duration metric: took 206.056597ms for postStartSetup
	I1124 03:12:18.763732  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.791899  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:18.792183  658811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:18.792233  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.820806  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.936530  658811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:18.948570  658811 start.go:128] duration metric: took 9.708372989s to createHost
	I1124 03:12:18.948686  658811 start.go:83] releasing machines lock for "embed-certs-284604", held for 9.708587492s
	I1124 03:12:18.948771  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.973190  658811 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:18.973375  658811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:18.973512  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.973582  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.998620  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.999698  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.845938  656542 addons.go:530] duration metric: took 4.058450553s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 03:12:18.846295  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:18.846717  656542 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
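The 500 body above is the apiserver's aggregated healthz report: each poststarthook is listed individually, and the endpoint fails until the rbac/bootstrap-roles hook completes. minikube simply re-polls until it returns 200, as the next lines show. A hedged Go sketch of such a poll loop, standard library only; the URL and interval are illustrative, and certificate verification is skipped because the test apiserver presents a self-signed cert:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitForHealthz polls an HTTPS /healthz endpoint until it returns 200
    // or the deadline passes, printing the diagnostic body on failure.
    func waitForHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("healthz did not become ready within %s", timeout)
    }

    func main() {
    	if err := waitForHealthz("https://192.168.76.2:8444/healthz", time.Minute); err != nil {
    		panic(err)
    	}
    }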
	I1124 03:12:19.335969  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:19.342155  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1124 03:12:19.343392  656542 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:19.343421  656542 api_server.go:131] duration metric: took 507.639836ms to wait for apiserver health ...
	I1124 03:12:19.343433  656542 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:19.347170  656542 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:19.347220  656542 system_pods.go:61] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.347233  656542 system_pods.go:61] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.347244  656542 system_pods.go:61] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.347253  656542 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.347263  656542 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.347271  656542 system_pods.go:61] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.347279  656542 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.347290  656542 system_pods.go:61] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.347300  656542 system_pods.go:74] duration metric: took 3.857291ms to wait for pod list to return data ...
	I1124 03:12:19.347309  656542 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:19.350005  656542 default_sa.go:45] found service account: "default"
	I1124 03:12:19.350027  656542 default_sa.go:55] duration metric: took 2.709767ms for default service account to be created ...
	I1124 03:12:19.350036  656542 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:19.354450  656542 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:19.354480  656542 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.354492  656542 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.354502  656542 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.354512  656542 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.354525  656542 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.354534  656542 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.354542  656542 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.354550  656542 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.354560  656542 system_pods.go:126] duration metric: took 4.516416ms to wait for k8s-apps to be running ...
	I1124 03:12:19.354569  656542 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:19.354617  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:19.377699  656542 system_svc.go:56] duration metric: took 23.119925ms WaitForService to wait for kubelet
	I1124 03:12:19.377726  656542 kubeadm.go:587] duration metric: took 4.590516557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:19.377808  656542 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:19.381785  656542 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:19.381815  656542 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:19.381831  656542 node_conditions.go:105] duration metric: took 4.017737ms to run NodePressure ...
	I1124 03:12:19.381846  656542 start.go:242] waiting for startup goroutines ...
	I1124 03:12:19.381857  656542 start.go:247] waiting for cluster config update ...
	I1124 03:12:19.381883  656542 start.go:256] writing updated cluster config ...
	I1124 03:12:19.382229  656542 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:19.387932  656542 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:19.394333  656542 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:16.529636  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:16.529826  657716 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:16.529877  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.529719  657716 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.530024  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:16.530070  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.534729  657716 addons.go:239] Setting addon default-storageclass=true in "no-preload-603010"
	W1124 03:12:16.534754  657716 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:16.534783  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.539339  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.565768  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.582397  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.585042  657716 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.585070  657716 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:16.585126  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.617946  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.706410  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:16.731745  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:16.731773  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:16.736337  657716 node_ready.go:35] waiting up to 6m0s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:16.736937  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.758823  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:16.758847  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:16.768684  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.788344  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:16.788369  657716 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:16.806593  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:16.806620  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:16.847576  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:16.847609  657716 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:16.867721  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:16.867755  657716 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:16.886765  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:16.886787  657716 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:16.907569  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:16.907732  657716 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:16.929396  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:16.929417  657716 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:16.958374  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:19.957067  657716 node_ready.go:49] node "no-preload-603010" is "Ready"
	I1124 03:12:19.957111  657716 node_ready.go:38] duration metric: took 3.220732108s for node "no-preload-603010" to be "Ready" ...
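The node_ready.go checks in this log poll until the node's Ready condition turns True. A sketch of the equivalent check with client-go; the dependency, kubeconfig path, and poll cadence are assumptions, and minikube's real implementation differs in detail:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeIsReady reports whether the named node has its Ready condition set
    // to True, mirroring what the node_ready.go wait above is looking for.
    func nodeIsReady(clientset *kubernetes.Clientset, name string) (bool, error) {
    	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	// Kubeconfig path is illustrative, not the test harness's actual path.
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	for {
    		if ready, err := nodeIsReady(clientset, "no-preload-603010"); err == nil && ready {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    }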
	I1124 03:12:19.957131  657716 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:19.957256  657716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:20.880814  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.143842388s)
	I1124 03:12:20.881241  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.112181993s)
	I1124 03:12:21.157660  657716 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.200376454s)
	I1124 03:12:21.157703  657716 api_server.go:72] duration metric: took 4.664681444s to wait for apiserver process to appear ...
	I1124 03:12:21.157713  657716 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:21.157733  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.158403  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199980339s)
	I1124 03:12:21.160177  657716 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-603010 addons enable metrics-server
	
	I1124 03:12:21.161363  657716 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 03:12:19.120481  658811 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:19.211741  658811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:19.277394  658811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:19.284078  658811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:19.284149  658811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:19.319995  658811 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:12:19.320028  658811 start.go:496] detecting cgroup driver to use...
	I1124 03:12:19.320064  658811 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:19.320117  658811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:19.345823  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:19.367716  658811 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:19.367782  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:19.389799  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:19.412438  658811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:19.524730  658811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:19.637210  658811 docker.go:234] disabling docker service ...
	I1124 03:12:19.637286  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:19.659861  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:19.677152  658811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:19.823448  658811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:19.960707  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:19.981616  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:20.012418  658811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:20.012486  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.058077  658811 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:20.058214  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.074742  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.118587  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.135044  658811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:20.151861  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.172656  658811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.194765  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.232792  658811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:20.242855  658811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:20.253417  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:20.371692  658811 ssh_runner.go:195] Run: sudo systemctl restart crio
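The CRI-O reconfiguration above is a series of in-place sed edits on /etc/crio/crio.conf.d/02-crio.conf followed by a daemon-reload and a crio restart. A hedged Go sketch of just the pause_image rewrite, mirroring the sed expression from the log; `setPauseImage` is an illustrative helper, not minikube's:

    package main

    import (
    	"os"
    	"regexp"
    )

    // setPauseImage rewrites the pause_image line in a CRI-O drop-in config,
    // equivalent to the log's `sudo sed -i 's|^.*pause_image = .*$|...|'` call.
    func setPauseImage(path, image string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	out := re.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
    	return os.WriteFile(path, out, 0644)
    }

    func main() {
    	if err := setPauseImage("/etc/crio/crio.conf.d/02-crio.conf", "registry.k8s.io/pause:3.10.1"); err != nil {
    		panic(err)
    	}
    	// After editing, CRI-O must pick up the change; the log runs
    	// `sudo systemctl daemon-reload` and `sudo systemctl restart crio`.
    }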
	I1124 03:12:21.221343  658811 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:21.221440  658811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:21.226905  658811 start.go:564] Will wait 60s for crictl version
	I1124 03:12:21.227016  658811 ssh_runner.go:195] Run: which crictl
	I1124 03:12:21.231693  658811 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:21.262514  658811 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:21.262603  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.302192  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.363037  658811 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:21.162777  657716 addons.go:530] duration metric: took 4.669427095s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 03:12:21.163688  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:21.163718  657716 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:20.668896  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:23.167980  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:21.364543  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
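The --format strings passed to docker container inspect and docker network inspect throughout this log are Go text/template expressions evaluated against the inspect JSON. A small sketch of how the nested `index` lookup resolves a host port; the struct types are mock stand-ins for Docker's inspect output, not Docker's actual types:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Mock of the slice of inspect data that the Docker CLI feeds its
    // --format template; mirrors the lookup
    // `(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort`.
    type binding struct{ HostPort string }

    type settings struct{ Ports map[string][]binding }

    type container struct{ NetworkSettings settings }

    func main() {
    	c := container{NetworkSettings: settings{Ports: map[string][]binding{
    		"22/tcp": {{HostPort: "33498"}},
    	}}}
    	tmpl := template.Must(template.New("port").Parse(
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
    	if err := tmpl.Execute(os.Stdout, c); err != nil { // prints 33498
    		panic(err)
    	}
    }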
	I1124 03:12:21.388019  658811 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:21.393290  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:21.406629  658811 kubeadm.go:884] updating cluster {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:21.406778  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:21.406846  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.445258  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.445284  658811 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:21.445336  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.471000  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.471025  658811 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:21.471037  658811 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:21.471125  658811 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-284604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:21.471186  658811 ssh_runner.go:195] Run: crio config
	I1124 03:12:21.516457  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:21.516480  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:21.516502  658811 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:21.516532  658811 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-284604 NodeName:embed-certs-284604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:21.516680  658811 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-284604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:21.516751  658811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:21.524967  658811 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:21.525035  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:21.533487  658811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 03:12:21.547228  658811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:21.640415  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
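The rendered config written above can be sanity-checked before init; a manual sketch using kubeadm's built-in validator (available since kubeadm v1.26), run inside the node against the file just copied:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new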
	I1124 03:12:21.656434  658811 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:21.660696  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:21.674410  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:21.772584  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:21.798340  658811 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604 for IP: 192.168.94.2
	I1124 03:12:21.798360  658811 certs.go:195] generating shared ca certs ...
	I1124 03:12:21.798381  658811 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.798539  658811 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:21.798593  658811 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:21.798607  658811 certs.go:257] generating profile certs ...
	I1124 03:12:21.798690  658811 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key
	I1124 03:12:21.798708  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt with IP's: []
	I1124 03:12:21.837756  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt ...
	I1124 03:12:21.837790  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt: {Name:mk6d8aec213556beda470e3e5188eed1aec5e183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838000  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key ...
	I1124 03:12:21.838030  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key: {Name:mk56f44e1d331f82a560e15fe6a3c3ca4602bba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838172  658811 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087
	I1124 03:12:21.838189  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 03:12:21.915471  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 ...
	I1124 03:12:21.915494  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087: {Name:mk185605a13bb00cdff0decbde0063003287a88f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915630  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 ...
	I1124 03:12:21.915643  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087: {Name:mk1404f69a73d575873220c9d20779709c9db66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915715  658811 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt
	I1124 03:12:21.915784  658811 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key
	I1124 03:12:21.915837  658811 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key
	I1124 03:12:21.915852  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt with IP's: []
	I1124 03:12:22.064876  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt ...
	I1124 03:12:22.064923  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt: {Name:mk7bbfb718db4eee243d6b6658f5b6db725b34b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:22.065108  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key ...
	I1124 03:12:22.065140  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key: {Name:mk282c31a6bdbd1f185d5fa986bb6679f789f94a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:22.065488  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:22.065564  658811 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:22.065576  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:22.065602  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:22.065630  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:22.065654  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:22.065702  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:22.066383  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:22.086471  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:22.103602  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:22.120085  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:22.137488  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:12:22.154084  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:22.171055  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:22.187877  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:22.204407  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:22.222560  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:22.241380  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:22.258066  658811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
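To double-check the SANs baked into the API server certificate copied above, one could inspect it with openssl inside the node; a manual sketch, not part of the test run (the expected IPs come from the generation step earlier in the log):

	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'
	# should list 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.94.2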
	I1124 03:12:22.269950  658811 ssh_runner.go:195] Run: openssl version
	I1124 03:12:22.276120  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:22.283870  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287375  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287414  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.321400  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:22.329479  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:22.338113  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342815  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342865  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.384524  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:22.393408  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:22.402946  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.406951  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.407009  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.445501  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
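The "<hash>.0" symlink names above are OpenSSL subject hashes, which is how the system trust store indexes CA certificates. Reproducing the last one by hand (both values appear in the log):

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above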
	I1124 03:12:22.454521  658811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:22.458152  658811 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:12:22.458212  658811 kubeadm.go:401] StartCluster: {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:22.458278  658811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:22.458330  658811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:22.487574  658811 cri.go:89] found id: ""
	I1124 03:12:22.487653  658811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:22.495876  658811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:12:22.505058  658811 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:12:22.505121  658811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:12:22.515162  658811 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:12:22.515181  658811 kubeadm.go:158] found existing configuration files:
	
	I1124 03:12:22.515229  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:12:22.525864  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:12:22.525956  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:12:22.535632  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:12:22.545975  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:12:22.546068  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:12:22.556144  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.566062  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:12:22.566123  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.576364  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:12:22.587041  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:12:22.587089  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:12:22.596656  658811 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:12:22.678370  658811 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:12:22.762592  658811 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1124 03:12:21.400229  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:23.400859  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:21.658606  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.664294  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:12:21.665654  657716 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:21.665685  657716 api_server.go:131] duration metric: took 507.965368ms to wait for apiserver health ...
	I1124 03:12:21.665696  657716 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:21.669523  657716 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:21.669569  657716 system_pods.go:61] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.669584  657716 system_pods.go:61] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.669600  657716 system_pods.go:61] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.669613  657716 system_pods.go:61] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.669620  657716 system_pods.go:61] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.669631  657716 system_pods.go:61] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.669640  657716 system_pods.go:61] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.669651  657716 system_pods.go:61] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.669661  657716 system_pods.go:74] duration metric: took 3.958242ms to wait for pod list to return data ...
	I1124 03:12:21.669744  657716 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:21.672641  657716 default_sa.go:45] found service account: "default"
	I1124 03:12:21.672665  657716 default_sa.go:55] duration metric: took 2.912794ms for default service account to be created ...
	I1124 03:12:21.672674  657716 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:21.676337  657716 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:21.676367  657716 system_pods.go:89] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.676379  657716 system_pods.go:89] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.676394  657716 system_pods.go:89] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.676403  657716 system_pods.go:89] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.676411  657716 system_pods.go:89] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.676422  657716 system_pods.go:89] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.676433  657716 system_pods.go:89] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.676441  657716 system_pods.go:89] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.676450  657716 system_pods.go:126] duration metric: took 3.770261ms to wait for k8s-apps to be running ...
	I1124 03:12:21.676459  657716 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:21.676504  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:21.690659  657716 system_svc.go:56] duration metric: took 14.192089ms WaitForService to wait for kubelet
	I1124 03:12:21.690686  657716 kubeadm.go:587] duration metric: took 5.197662584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:21.690707  657716 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:21.693136  657716 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:21.693164  657716 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:21.693184  657716 node_conditions.go:105] duration metric: took 2.469957ms to run NodePressure ...
	I1124 03:12:21.693203  657716 start.go:242] waiting for startup goroutines ...
	I1124 03:12:21.693215  657716 start.go:247] waiting for cluster config update ...
	I1124 03:12:21.693239  657716 start.go:256] writing updated cluster config ...
	I1124 03:12:21.693532  657716 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:21.697901  657716 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:21.701025  657716 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:12:23.706826  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.707596  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.168947  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:27.669069  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:25.402048  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.901054  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.707794  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.710379  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.675678  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:32.166267  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:34.784594  658811 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:12:34.784648  658811 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:12:34.784736  658811 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:12:34.784810  658811 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:12:34.784870  658811 kubeadm.go:319] OS: Linux
	I1124 03:12:34.784983  658811 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:12:34.785059  658811 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:12:34.785107  658811 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:12:34.785166  658811 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:12:34.785237  658811 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:12:34.785303  658811 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:12:34.785372  658811 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:12:34.785441  658811 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:12:34.785518  658811 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:12:34.785647  658811 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:12:34.785738  658811 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:12:34.785806  658811 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:12:34.786978  658811 out.go:252]   - Generating certificates and keys ...
	I1124 03:12:34.787057  658811 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:12:34.787166  658811 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:12:34.787260  658811 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:12:34.787314  658811 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:12:34.787380  658811 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:12:34.787463  658811 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:12:34.787510  658811 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:12:34.787654  658811 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787713  658811 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:12:34.787835  658811 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787929  658811 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:12:34.787996  658811 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:12:34.788075  658811 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:12:34.788161  658811 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:12:34.788246  658811 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:12:34.788307  658811 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:12:34.788377  658811 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:12:34.788464  658811 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:12:34.788510  658811 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:12:34.788574  658811 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:12:34.788677  658811 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:12:34.789842  658811 out.go:252]   - Booting up control plane ...
	I1124 03:12:34.789955  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:12:34.790029  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:12:34.790102  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:12:34.790202  658811 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:12:34.790286  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:12:34.790369  658811 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:12:34.790438  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:12:34.790470  658811 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:12:34.790573  658811 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:12:34.790662  658811 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:12:34.790715  658811 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001939634s
	I1124 03:12:34.790808  658811 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:12:34.790874  658811 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 03:12:34.790987  658811 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:12:34.791057  658811 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:12:34.791109  658811 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.83516238s
	I1124 03:12:34.791172  658811 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.120221493s
	I1124 03:12:34.791231  658811 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501624476s
	I1124 03:12:34.791319  658811 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:12:34.791443  658811 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:12:34.791516  658811 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:12:34.791778  658811 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-284604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:12:34.791865  658811 kubeadm.go:319] [bootstrap-token] Using token: 6opk0j.95uwfc60sd8szhpc
	I1124 03:12:34.793026  658811 out.go:252]   - Configuring RBAC rules ...
	I1124 03:12:34.793125  658811 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:12:34.793213  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:12:34.793344  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:12:34.793455  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:12:34.793557  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:12:34.793642  658811 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:12:34.793774  658811 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:12:34.793810  658811 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:12:34.793851  658811 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:12:34.793857  658811 kubeadm.go:319] 
	I1124 03:12:34.793964  658811 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:12:34.793973  658811 kubeadm.go:319] 
	I1124 03:12:34.794046  658811 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:12:34.794053  658811 kubeadm.go:319] 
	I1124 03:12:34.794074  658811 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:12:34.794151  658811 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:12:34.794229  658811 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:12:34.794239  658811 kubeadm.go:319] 
	I1124 03:12:34.794318  658811 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:12:34.794327  658811 kubeadm.go:319] 
	I1124 03:12:34.794375  658811 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:12:34.794381  658811 kubeadm.go:319] 
	I1124 03:12:34.794424  658811 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:12:34.794490  658811 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:12:34.794554  658811 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:12:34.794560  658811 kubeadm.go:319] 
	I1124 03:12:34.794633  658811 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:12:34.794705  658811 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:12:34.794712  658811 kubeadm.go:319] 
	I1124 03:12:34.794781  658811 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.794955  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:12:34.794990  658811 kubeadm.go:319] 	--control-plane 
	I1124 03:12:34.794996  658811 kubeadm.go:319] 
	I1124 03:12:34.795133  658811 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:12:34.795142  658811 kubeadm.go:319] 
	I1124 03:12:34.795208  658811 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.795304  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
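The --discovery-token-ca-cert-hash in the join command above is the SHA-256 of the cluster CA's public key. It can be recomputed from the CA certificate with the standard kubeadm recipe (CA path as used by minikube in this run; a sketch):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | openssl pkey -pubin -outform der | openssl dgst -sha256 -hex
	# should match the aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 value above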
	I1124 03:12:34.795316  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:34.795322  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:34.796503  658811 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1124 03:12:29.901574  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.399665  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.206353  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.206828  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.667383  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:35.167626  650744 pod_ready.go:94] pod "coredns-5dd5756b68-5nwx9" is "Ready"
	I1124 03:12:35.167652  650744 pod_ready.go:86] duration metric: took 36.006547637s for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.170471  650744 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.174915  650744 pod_ready.go:94] pod "etcd-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.174952  650744 pod_ready.go:86] duration metric: took 4.460425ms for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.178276  650744 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.181797  650744 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.181815  650744 pod_ready.go:86] duration metric: took 3.521385ms for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.184086  650744 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.364640  650744 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.364666  650744 pod_ready.go:86] duration metric: took 180.561055ms for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.566321  650744 pod_ready.go:83] waiting for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.965760  650744 pod_ready.go:94] pod "kube-proxy-r82jh" is "Ready"
	I1124 03:12:35.965786  650744 pod_ready.go:86] duration metric: took 399.441601ms for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.166112  650744 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564858  650744 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-579951" is "Ready"
	I1124 03:12:36.564911  650744 pod_ready.go:86] duration metric: took 398.774389ms for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564927  650744 pod_ready.go:40] duration metric: took 37.40842222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
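The pod_ready polling seen throughout these logs is roughly what kubectl's built-in wait does; an equivalent manual command for the CoreDNS pods (label taken from the selector list above; a sketch, not part of the test run):

	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m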
	I1124 03:12:36.606666  650744 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 03:12:36.609650  650744 out.go:203] 
	W1124 03:12:36.610839  650744 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:12:36.611943  650744 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:12:36.613009  650744 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-579951" cluster and "default" namespace by default
	I1124 03:12:34.797545  658811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:12:34.801904  658811 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:12:34.801919  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:12:34.815659  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
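After the manifest apply above, the kindnet pods should come up in kube-system; a quick check using the same kubectl binary and kubeconfig as the log (the app=kindnet label is kindnet's usual one, assumed here):

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get pods -l app=kindnet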
	I1124 03:12:35.008985  658811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:12:35.009118  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-284604 minikube.k8s.io/updated_at=2025_11_24T03_12_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=embed-certs-284604 minikube.k8s.io/primary=true
	I1124 03:12:35.009137  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.019423  658811 ops.go:34] apiserver oom_adj: -16
	I1124 03:12:35.098937  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.600025  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.099882  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.599914  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.099714  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.599861  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.098989  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.599248  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.099379  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.599598  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.664570  658811 kubeadm.go:1114] duration metric: took 4.655535544s to wait for elevateKubeSystemPrivileges
	I1124 03:12:39.664621  658811 kubeadm.go:403] duration metric: took 17.206413974s to StartCluster
	I1124 03:12:39.664642  658811 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.664720  658811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:39.666858  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.667137  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:12:39.667148  658811 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:39.667230  658811 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:39.667331  658811 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-284604"
	I1124 03:12:39.667356  658811 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-284604"
	I1124 03:12:39.667360  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:39.667396  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.667427  658811 addons.go:70] Setting default-storageclass=true in profile "embed-certs-284604"
	I1124 03:12:39.667451  658811 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-284604"
	I1124 03:12:39.667810  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.667990  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.668614  658811 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:39.670239  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:39.693324  658811 addons.go:239] Setting addon default-storageclass=true in "embed-certs-284604"
	I1124 03:12:39.693377  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.693617  658811 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 03:12:34.900232  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:36.901987  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:39.399311  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:39.693843  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.695301  658811 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.695324  658811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:39.695401  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.723273  658811 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.723298  658811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:39.723378  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.730678  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.746663  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.790082  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:12:39.807223  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:39.854663  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.859938  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.988561  658811 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
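The sed pipeline above splices a hosts block into the CoreDNS Corefile. Its effect can be inspected directly; the expected block below is reconstructed from the pipeline itself, not separately verified:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# should now contain:
	#   hosts {
	#      192.168.94.1 host.minikube.internal
	#      fallthrough
	#   }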
	I1124 03:12:39.990213  658811 node_ready.go:35] waiting up to 6m0s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:12:40.170444  658811 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1124 03:12:36.707151  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:39.206261  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:41.206507  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	I1124 03:12:40.171595  658811 addons.go:530] duration metric: took 504.363947ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:12:40.492653  658811 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-284604" context rescaled to 1 replicas
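The rescale noted above corresponds to a plain scale call on the CoreDNS deployment; an equivalent manual sketch:

	kubectl -n kube-system scale deployment coredns --replicas=1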
	W1124 03:12:41.992667  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:43.993353  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:41.399566  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.899302  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.705614  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.706618  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.993493  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:47.993708  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:46.399440  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.399607  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.205812  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:50.206724  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:50.493353  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	I1124 03:12:50.993323  658811 node_ready.go:49] node "embed-certs-284604" is "Ready"
	I1124 03:12:50.993350  658811 node_ready.go:38] duration metric: took 11.003110454s for node "embed-certs-284604" to be "Ready" ...
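The node-readiness loop that just resolved is equivalent to waiting on the node's Ready condition; a manual one-liner with the same 6m budget the log mentions:

	kubectl wait --for=condition=Ready node/embed-certs-284604 --timeout=6m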
	I1124 03:12:50.993367  658811 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:50.993411  658811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:51.005273  658811 api_server.go:72] duration metric: took 11.338089025s to wait for apiserver process to appear ...
	I1124 03:12:51.005299  658811 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:51.005319  658811 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:12:51.010460  658811 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
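The healthz probe above is reproducible with curl; -k skips TLS verification since the API server certificate is signed by the cluster's own CA (a manual sketch):

	curl -k https://192.168.94.2:8443/healthz
	# ok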
	I1124 03:12:51.011346  658811 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:51.011367  658811 api_server.go:131] duration metric: took 6.06186ms to wait for apiserver health ...
	I1124 03:12:51.011376  658811 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:51.014056  658811 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:51.014084  658811 system_pods.go:61] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.014092  658811 system_pods.go:61] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.014101  658811 system_pods.go:61] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.014106  658811 system_pods.go:61] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.014113  658811 system_pods.go:61] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.014119  658811 system_pods.go:61] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.014136  658811 system_pods.go:61] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.014147  658811 system_pods.go:61] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.014155  658811 system_pods.go:74] duration metric: took 2.773001ms to wait for pod list to return data ...
	I1124 03:12:51.014164  658811 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:51.016349  658811 default_sa.go:45] found service account: "default"
	I1124 03:12:51.016366  658811 default_sa.go:55] duration metric: took 2.196577ms for default service account to be created ...
	I1124 03:12:51.016373  658811 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:51.018741  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.018763  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.018768  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.018774  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.018778  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.018783  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.018787  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.018791  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.018798  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.018817  658811 retry.go:31] will retry after 267.963041ms: missing components: kube-dns
	I1124 03:12:51.291183  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.291223  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.291231  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.291239  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.291244  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.291250  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.291255  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.291260  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.291268  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.291295  658811 retry.go:31] will retry after 316.287047ms: missing components: kube-dns
	I1124 03:12:51.610985  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.611019  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.611026  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.611037  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.611045  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.611055  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.611061  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.611066  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.611074  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.611098  658811 retry.go:31] will retry after 440.03042ms: missing components: kube-dns
	I1124 03:12:52.054793  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:52.054821  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:52.054826  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:52.054831  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:52.054835  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:52.054839  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:52.054842  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:52.054845  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:52.054850  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:52.054863  658811 retry.go:31] will retry after 498.386661ms: missing components: kube-dns
	I1124 03:12:52.557040  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:52.557071  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Running
	I1124 03:12:52.557079  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:52.557084  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:52.557089  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:52.557095  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:52.557100  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:52.557104  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:52.557110  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Running
	I1124 03:12:52.557120  658811 system_pods.go:126] duration metric: took 1.540739928s to wait for k8s-apps to be running ...
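
[Editor's note] The four "will retry after ..." lines above come from a backoff loop that re-lists kube-system pods until no required component is missing. A minimal sketch of that pattern, where checkPods is a hypothetical stand-in for the pod listing and the delay schedule only approximates the ~268ms/316ms/440ms/498ms intervals in the log:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// checkPods is a hypothetical stand-in for listing kube-system pods and
// reporting which required components (e.g. kube-dns) are still Pending.
func checkPods() error {
	if rand.Intn(4) != 0 {
		return errors.New("missing components: kube-dns")
	}
	return nil
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		err := checkPods()
		if err == nil {
			fmt.Printf("all components running after %d attempt(s)\n", attempt)
			return
		}
		// Jittered, growing delay between re-lists.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
}
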
	I1124 03:12:52.557134  658811 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:52.557188  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:52.570482  658811 system_svc.go:56] duration metric: took 13.341226ms WaitForService to wait for kubelet
	I1124 03:12:52.570511  658811 kubeadm.go:587] duration metric: took 12.903331916s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
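
[Editor's note] The kubelet check above runs systemd's unit query over SSH inside the node. Run locally, the same probe reduces to the unit's exit status; a sketch (minikube executes this through its ssh_runner rather than directly on the host):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet <unit>` exits 0 iff the unit is active,
	// which is exactly what the WaitForService step above relies on.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}
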
	I1124 03:12:52.570535  658811 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:52.573089  658811 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:52.573117  658811 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:52.573148  658811 node_conditions.go:105] duration metric: took 2.605161ms to run NodePressure ...
	I1124 03:12:52.573166  658811 start.go:242] waiting for startup goroutines ...
	I1124 03:12:52.573175  658811 start.go:247] waiting for cluster config update ...
	I1124 03:12:52.573187  658811 start.go:256] writing updated cluster config ...
	I1124 03:12:52.573408  658811 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:52.576899  658811 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:52.580189  658811 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.584242  658811 pod_ready.go:94] pod "coredns-66bc5c9577-89mzc" is "Ready"
	I1124 03:12:52.584262  658811 pod_ready.go:86] duration metric: took 4.045428ms for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.586066  658811 pod_ready.go:83] waiting for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.590045  658811 pod_ready.go:94] pod "etcd-embed-certs-284604" is "Ready"
	I1124 03:12:52.590064  658811 pod_ready.go:86] duration metric: took 3.981268ms for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.592126  658811 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.595532  658811 pod_ready.go:94] pod "kube-apiserver-embed-certs-284604" is "Ready"
	I1124 03:12:52.595555  658811 pod_ready.go:86] duration metric: took 3.408619ms for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.597386  658811 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.980512  658811 pod_ready.go:94] pod "kube-controller-manager-embed-certs-284604" is "Ready"
	I1124 03:12:52.980538  658811 pod_ready.go:86] duration metric: took 383.129867ms for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.181479  658811 pod_ready.go:83] waiting for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.581552  658811 pod_ready.go:94] pod "kube-proxy-bn8fd" is "Ready"
	I1124 03:12:53.581575  658811 pod_ready.go:86] duration metric: took 400.07394ms for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.781409  658811 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.181669  658811 pod_ready.go:94] pod "kube-scheduler-embed-certs-284604" is "Ready"
	I1124 03:12:54.181696  658811 pod_ready.go:86] duration metric: took 400.263506ms for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.181712  658811 pod_ready.go:40] duration metric: took 1.604781402s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:54.228480  658811 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:54.231260  658811 out.go:179] * Done! kubectl is now configured to use "embed-certs-284604" cluster and "default" namespace by default
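
[Editor's note] The pod_ready waits in the runs above boil down to reading the PodReady condition on each labeled kube-system pod. A minimal client-go sketch of one such check; the kubeconfig path and the kube-dns label selector are assumptions for illustration (minikube builds its client from the profile's embedded credentials):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		ready := false
		for _, c := range pod.Status.Conditions {
			// A pod counts as "Ready" when this condition is True.
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
	}
}
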
	W1124 03:12:50.399926  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:52.400576  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:52.900171  656542 pod_ready.go:94] pod "coredns-66bc5c9577-w62hm" is "Ready"
	I1124 03:12:52.900193  656542 pod_ready.go:86] duration metric: took 33.505834176s for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.903110  656542 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.907513  656542 pod_ready.go:94] pod "etcd-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:52.907539  656542 pod_ready.go:86] duration metric: took 4.401311ms for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.909400  656542 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.913156  656542 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:52.913178  656542 pod_ready.go:86] duration metric: took 3.755745ms for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.914951  656542 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.098380  656542 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:53.098409  656542 pod_ready.go:86] duration metric: took 183.435612ms for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.298588  656542 pod_ready.go:83] waiting for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.698811  656542 pod_ready.go:94] pod "kube-proxy-xgjzs" is "Ready"
	I1124 03:12:53.698835  656542 pod_ready.go:86] duration metric: took 400.225655ms for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.898023  656542 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.299083  656542 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:54.299107  656542 pod_ready.go:86] duration metric: took 401.0576ms for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.299119  656542 pod_ready.go:40] duration metric: took 34.911155437s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:54.345901  656542 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:54.347541  656542 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-993813" cluster and "default" namespace by default
	W1124 03:12:52.208247  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:54.707505  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	I1124 03:12:56.206822  657716 pod_ready.go:94] pod "coredns-66bc5c9577-9n5xf" is "Ready"
	I1124 03:12:56.206857  657716 pod_ready.go:86] duration metric: took 34.50580389s for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.209449  657716 pod_ready.go:83] waiting for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.213288  657716 pod_ready.go:94] pod "etcd-no-preload-603010" is "Ready"
	I1124 03:12:56.213310  657716 pod_ready.go:86] duration metric: took 3.839555ms for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.215450  657716 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.219181  657716 pod_ready.go:94] pod "kube-apiserver-no-preload-603010" is "Ready"
	I1124 03:12:56.219201  657716 pod_ready.go:86] duration metric: took 3.726981ms for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.221198  657716 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.404873  657716 pod_ready.go:94] pod "kube-controller-manager-no-preload-603010" is "Ready"
	I1124 03:12:56.404930  657716 pod_ready.go:86] duration metric: took 183.709106ms for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.605567  657716 pod_ready.go:83] waiting for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.005571  657716 pod_ready.go:94] pod "kube-proxy-swj6c" is "Ready"
	I1124 03:12:57.005598  657716 pod_ready.go:86] duration metric: took 400.0046ms for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.205842  657716 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.605312  657716 pod_ready.go:94] pod "kube-scheduler-no-preload-603010" is "Ready"
	I1124 03:12:57.605336  657716 pod_ready.go:86] duration metric: took 399.465818ms for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.605349  657716 pod_ready.go:40] duration metric: took 35.907419342s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:57.646839  657716 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:57.648681  657716 out.go:179] * Done! kubectl is now configured to use "no-preload-603010" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 03:12:51 embed-certs-284604 crio[767]: time="2025-11-24T03:12:51.138900737Z" level=info msg="Starting container: 8c66043dd4cf57893aeffdcb75060eaab8a509d43b6fe4ef3a9ea4dfdc53f281" id=636e9b25-0104-4b27-abce-3422a6429e24 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:12:51 embed-certs-284604 crio[767]: time="2025-11-24T03:12:51.141679359Z" level=info msg="Started container" PID=1834 containerID=8c66043dd4cf57893aeffdcb75060eaab8a509d43b6fe4ef3a9ea4dfdc53f281 description=kube-system/coredns-66bc5c9577-89mzc/coredns id=636e9b25-0104-4b27-abce-3422a6429e24 name=/runtime.v1.RuntimeService/StartContainer sandboxID=95f615c4dd86aebce925237f9c85fe642cac52ff3eb875f64e64d25f56c68f6d
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.687416415Z" level=info msg="Running pod sandbox: default/busybox/POD" id=39362529-c4ad-45fb-ac07-e36e0412c8b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.68750575Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.692335174Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:806e7d151f6ff476824a4a82cf4ca69c55fd89fdd501a6e4d06e638f51eda1ad UID:84f9c221-0f52-448e-88a0-6d2e90c436b2 NetNS:/var/run/netns/1b5c381d-9fd8-4299-9f97-94a064082a4e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008abd0}] Aliases:map[]}"
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.69236391Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.711178973Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:806e7d151f6ff476824a4a82cf4ca69c55fd89fdd501a6e4d06e638f51eda1ad UID:84f9c221-0f52-448e-88a0-6d2e90c436b2 NetNS:/var/run/netns/1b5c381d-9fd8-4299-9f97-94a064082a4e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008abd0}] Aliases:map[]}"
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.711302818Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.712022781Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.712789832Z" level=info msg="Ran pod sandbox 806e7d151f6ff476824a4a82cf4ca69c55fd89fdd501a6e4d06e638f51eda1ad with infra container: default/busybox/POD" id=39362529-c4ad-45fb-ac07-e36e0412c8b7 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.714070324Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=59c46fc3-d641-421c-8613-5fc2a95c877e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.714208106Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=59c46fc3-d641-421c-8613-5fc2a95c877e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.714248439Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=59c46fc3-d641-421c-8613-5fc2a95c877e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.714980792Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7ca5a61a-8dea-4fd1-b7c9-782e2bf5bd4f name=/runtime.v1.ImageService/PullImage
	Nov 24 03:12:54 embed-certs-284604 crio[767]: time="2025-11-24T03:12:54.718745903Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:12:55 embed-certs-284604 crio[767]: time="2025-11-24T03:12:55.451470293Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=7ca5a61a-8dea-4fd1-b7c9-782e2bf5bd4f name=/runtime.v1.ImageService/PullImage
	Nov 24 03:12:55 embed-certs-284604 crio[767]: time="2025-11-24T03:12:55.452153176Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=186c2cfa-4b5c-43f9-b515-383b75a551e9 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:55 embed-certs-284604 crio[767]: time="2025-11-24T03:12:55.453366473Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=3897d28b-4179-45e5-95f1-f1bf186fbc29 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:55 embed-certs-284604 crio[767]: time="2025-11-24T03:12:55.456470858Z" level=info msg="Creating container: default/busybox/busybox" id=32274560-8ef1-40e5-a880-348bc5b1f5ba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:55 embed-certs-284604 crio[767]: time="2025-11-24T03:12:55.456590489Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:55 embed-certs-284604 crio[767]: time="2025-11-24T03:12:55.461190422Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:55 embed-certs-284604 crio[767]: time="2025-11-24T03:12:55.461621589Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:55 embed-certs-284604 crio[767]: time="2025-11-24T03:12:55.485954863Z" level=info msg="Created container db920394b0997e31b5a90dc080647d025c9dcdf49bd6f623b5d91278c3dc7742: default/busybox/busybox" id=32274560-8ef1-40e5-a880-348bc5b1f5ba name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:55 embed-certs-284604 crio[767]: time="2025-11-24T03:12:55.48647701Z" level=info msg="Starting container: db920394b0997e31b5a90dc080647d025c9dcdf49bd6f623b5d91278c3dc7742" id=003ff171-73ee-4b2b-bf8b-3b06c5b569b6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:12:55 embed-certs-284604 crio[767]: time="2025-11-24T03:12:55.488074051Z" level=info msg="Started container" PID=1916 containerID=db920394b0997e31b5a90dc080647d025c9dcdf49bd6f623b5d91278c3dc7742 description=default/busybox/busybox id=003ff171-73ee-4b2b-bf8b-3b06c5b569b6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=806e7d151f6ff476824a4a82cf4ca69c55fd89fdd501a6e4d06e638f51eda1ad
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	db920394b0997       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   806e7d151f6ff       busybox                                      default
	8c66043dd4cf5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   95f615c4dd86a       coredns-66bc5c9577-89mzc                     kube-system
	4ff0beba7e76a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   1c5468cd3d8d8       storage-provisioner                          kube-system
	379e92852faaa       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   1a6c722742407       kindnet-7tbg8                                kube-system
	ce60d0b8ed8a2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   b7d56e4c703e7       kube-proxy-bn8fd                             kube-system
	37a9f44f3cce5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   c51caa850894e       kube-scheduler-embed-certs-284604            kube-system
	2573fe863cfa1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   c31597b078b80       kube-apiserver-embed-certs-284604            kube-system
	0fdd7c0e0f35d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   6ca4450e73784       kube-controller-manager-embed-certs-284604   kube-system
	160f378f3486e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   b318adae48550       etcd-embed-certs-284604                      kube-system
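
[Editor's note] The container-status table above is rendered from what the CRI API reports for this node; the CRI-O log before it shows the server side of the same RunPodSandbox/PullImage/CreateContainer/StartContainer flow. A minimal sketch of the client-side query over CRI-O's socket (the socket path is an assumption; crictl wraps this same RPC):

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumption: CRI-O's default socket location.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		// Truncated ID, name, and state: the columns of the table above.
		fmt.Printf("%-13.13s %-25s %s\n", c.Id, c.Metadata.Name, c.State)
	}
}
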
	
	
	==> coredns [8c66043dd4cf57893aeffdcb75060eaab8a509d43b6fe4ef3a9ea4dfdc53f281] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53245 - 40114 "HINFO IN 6063367151770502367.944854858772612538. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.516111544s
	
	
	==> describe nodes <==
	Name:               embed-certs-284604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-284604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=embed-certs-284604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_12_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:12:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-284604
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:12:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:12:50 +0000   Mon, 24 Nov 2025 03:12:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:12:50 +0000   Mon, 24 Nov 2025 03:12:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:12:50 +0000   Mon, 24 Nov 2025 03:12:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:12:50 +0000   Mon, 24 Nov 2025 03:12:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-284604
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                069cc4ec-f604-4b4c-a3d4-6c93aa172617
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-89mzc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-embed-certs-284604                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-7tbg8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-284604             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-284604    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-bn8fd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-284604             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node embed-certs-284604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node embed-certs-284604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node embed-certs-284604 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node embed-certs-284604 event: Registered Node embed-certs-284604 in Controller
	  Normal  NodeReady                13s   kubelet          Node embed-certs-284604 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [160f378f3486e4957414d5d7a90cc48dc67f73c04d867e1defd941bb666cdef2] <==
	{"level":"warn","ts":"2025-11-24T03:12:30.852762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.858949Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.865260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.874271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.881131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.887460Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.895528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.903285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.910283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.917780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.926272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.933907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.942305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.949671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.956354Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.973694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.979689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:30.987126Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:31.003428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59120","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:31.010568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:31.018044Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:31.025756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:31.050621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:31.058417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:31.121755Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59240","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:13:03 up  1:55,  0 user,  load average: 4.62, 4.15, 2.71
	Linux embed-certs-284604 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [379e92852faaaec2001597e3fa5b667e8016115a8065e91430639046dda60331] <==
	I1124 03:12:40.401051       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:12:40.401299       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 03:12:40.401417       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:12:40.401432       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:12:40.401453       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:12:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:12:40.604443       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:12:40.604498       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:12:40.604513       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:12:40.696518       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:12:41.096301       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:12:41.096336       1 metrics.go:72] Registering metrics
	I1124 03:12:41.096433       1 controller.go:711] "Syncing nftables rules"
	I1124 03:12:50.607645       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:12:50.607697       1 main.go:301] handling current node
	I1124 03:13:00.604871       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:13:00.604928       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2573fe863cfa150aa8a4a50d6d4712165289de4e2cd8a3a94d823f18174aeb88] <==
	E1124 03:12:31.793494       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1124 03:12:31.843452       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:12:31.843606       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:12:31.844599       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:12:31.851696       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:12:31.851772       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:12:31.946142       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:12:32.643672       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:12:32.647377       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:12:32.647394       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:12:33.076429       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:12:33.107536       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:12:33.146442       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:12:33.151334       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 03:12:33.152099       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:12:33.155484       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:12:34.018088       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:12:34.184620       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:12:34.192209       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:12:34.198701       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:12:39.789722       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:12:39.918648       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:12:40.018846       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:12:40.022301       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 03:13:02.467837       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:36942: use of closed network connection
	
	
	==> kube-controller-manager [0fdd7c0e0f35d455ebb3ea829d4c03be6ac582959d8ef79b15467a24fbb4559b] <==
	I1124 03:12:39.011459       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:12:39.011479       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 03:12:39.011487       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 03:12:39.011497       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:12:39.011519       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:12:39.011516       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:12:39.011605       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:12:39.011795       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:12:39.012191       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 03:12:39.012317       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:12:39.012362       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:12:39.012372       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:12:39.012396       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 03:12:39.012533       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:12:39.012723       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:12:39.014937       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 03:12:39.014999       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 03:12:39.015036       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 03:12:39.015043       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 03:12:39.015047       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 03:12:39.017136       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:12:39.020063       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-284604" podCIDRs=["10.244.0.0/24"]
	I1124 03:12:39.020956       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:12:39.032426       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:12:53.966120       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [ce60d0b8ed8a256cadb0bcd706f3a9f86b0b840f30b74cc5711dbd5a322794ab] <==
	I1124 03:12:40.282246       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:12:40.354518       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:12:40.455102       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:12:40.455142       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 03:12:40.455220       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:12:40.473804       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:12:40.473854       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:12:40.479217       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:12:40.479607       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:12:40.479651       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:12:40.482662       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:12:40.482684       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:12:40.482721       1 config.go:200] "Starting service config controller"
	I1124 03:12:40.482723       1 config.go:309] "Starting node config controller"
	I1124 03:12:40.482735       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:12:40.482736       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:12:40.482741       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:12:40.482743       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:12:40.482727       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:12:40.583209       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:12:40.583218       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:12:40.583260       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
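
[Editor's note] The "Waiting for caches to sync" / "Caches are synced" pairs above are the standard client-go shared-informer startup pattern: start the informers, then block until their local caches reflect the apiserver before acting. A minimal sketch of that pattern (kubeconfig path and resync period are assumptions):

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Start an informer, then block until its cache is synced, mirroring
	// the kube-proxy log lines above.
	factory := informers.NewSharedInformerFactory(cs, 30*time.Second)
	podInformer := factory.Core().V1().Pods().Informer()
	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	if !cache.WaitForCacheSync(stop, podInformer.HasSynced) {
		panic("caches failed to sync")
	}
	fmt.Println("caches are synced")
}
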
	
	
	==> kube-scheduler [37a9f44f3cce52991c2f1b360fd5e092e59b2f323b1517ab495c35c58d09bfb2] <==
	E1124 03:12:31.707473       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:12:31.707999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:12:31.708126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:12:31.708132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:12:31.708241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:12:31.708368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:12:31.708472       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:12:31.708583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:12:31.708689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:12:31.708532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:12:31.708292       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:12:31.708698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:12:32.632050       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:12:32.682119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:12:32.685054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:12:32.699243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:12:32.727691       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:12:32.731030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:12:32.745291       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:12:32.770451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:12:32.798240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:12:32.848620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:12:32.878761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:12:32.994195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 03:12:35.301374       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:12:35 embed-certs-284604 kubelet[1293]: I1124 03:12:35.068423    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-284604" podStartSLOduration=1.068413658 podStartE2EDuration="1.068413658s" podCreationTimestamp="2025-11-24 03:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:12:35.06834944 +0000 UTC m=+1.132735955" watchObservedRunningTime="2025-11-24 03:12:35.068413658 +0000 UTC m=+1.132800173"
	Nov 24 03:12:35 embed-certs-284604 kubelet[1293]: I1124 03:12:35.077549    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-284604" podStartSLOduration=1.077531254 podStartE2EDuration="1.077531254s" podCreationTimestamp="2025-11-24 03:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:12:35.076806126 +0000 UTC m=+1.141192643" watchObservedRunningTime="2025-11-24 03:12:35.077531254 +0000 UTC m=+1.141917766"
	Nov 24 03:12:35 embed-certs-284604 kubelet[1293]: I1124 03:12:35.096473    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-284604" podStartSLOduration=1.09645251 podStartE2EDuration="1.09645251s" podCreationTimestamp="2025-11-24 03:12:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:12:35.086441922 +0000 UTC m=+1.150828437" watchObservedRunningTime="2025-11-24 03:12:35.09645251 +0000 UTC m=+1.160839027"
	Nov 24 03:12:39 embed-certs-284604 kubelet[1293]: I1124 03:12:39.037233    1293 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 03:12:39 embed-certs-284604 kubelet[1293]: I1124 03:12:39.037868    1293 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:12:39 embed-certs-284604 kubelet[1293]: I1124 03:12:39.938589    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/163b51f7-e8f5-47e0-9ea1-ca6d037db165-lib-modules\") pod \"kube-proxy-bn8fd\" (UID: \"163b51f7-e8f5-47e0-9ea1-ca6d037db165\") " pod="kube-system/kube-proxy-bn8fd"
	Nov 24 03:12:39 embed-certs-284604 kubelet[1293]: I1124 03:12:39.938753    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cnj5\" (UniqueName: \"kubernetes.io/projected/163b51f7-e8f5-47e0-9ea1-ca6d037db165-kube-api-access-5cnj5\") pod \"kube-proxy-bn8fd\" (UID: \"163b51f7-e8f5-47e0-9ea1-ca6d037db165\") " pod="kube-system/kube-proxy-bn8fd"
	Nov 24 03:12:39 embed-certs-284604 kubelet[1293]: I1124 03:12:39.938807    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45ltv\" (UniqueName: \"kubernetes.io/projected/903047e3-558b-41ce-a93d-9ed12844b7d3-kube-api-access-45ltv\") pod \"kindnet-7tbg8\" (UID: \"903047e3-558b-41ce-a93d-9ed12844b7d3\") " pod="kube-system/kindnet-7tbg8"
	Nov 24 03:12:39 embed-certs-284604 kubelet[1293]: I1124 03:12:39.938879    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/163b51f7-e8f5-47e0-9ea1-ca6d037db165-kube-proxy\") pod \"kube-proxy-bn8fd\" (UID: \"163b51f7-e8f5-47e0-9ea1-ca6d037db165\") " pod="kube-system/kube-proxy-bn8fd"
	Nov 24 03:12:39 embed-certs-284604 kubelet[1293]: I1124 03:12:39.938968    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/163b51f7-e8f5-47e0-9ea1-ca6d037db165-xtables-lock\") pod \"kube-proxy-bn8fd\" (UID: \"163b51f7-e8f5-47e0-9ea1-ca6d037db165\") " pod="kube-system/kube-proxy-bn8fd"
	Nov 24 03:12:39 embed-certs-284604 kubelet[1293]: I1124 03:12:39.938990    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/903047e3-558b-41ce-a93d-9ed12844b7d3-cni-cfg\") pod \"kindnet-7tbg8\" (UID: \"903047e3-558b-41ce-a93d-9ed12844b7d3\") " pod="kube-system/kindnet-7tbg8"
	Nov 24 03:12:39 embed-certs-284604 kubelet[1293]: I1124 03:12:39.939024    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/903047e3-558b-41ce-a93d-9ed12844b7d3-xtables-lock\") pod \"kindnet-7tbg8\" (UID: \"903047e3-558b-41ce-a93d-9ed12844b7d3\") " pod="kube-system/kindnet-7tbg8"
	Nov 24 03:12:39 embed-certs-284604 kubelet[1293]: I1124 03:12:39.939047    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/903047e3-558b-41ce-a93d-9ed12844b7d3-lib-modules\") pod \"kindnet-7tbg8\" (UID: \"903047e3-558b-41ce-a93d-9ed12844b7d3\") " pod="kube-system/kindnet-7tbg8"
	Nov 24 03:12:41 embed-certs-284604 kubelet[1293]: I1124 03:12:41.056847    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7tbg8" podStartSLOduration=2.056824573 podStartE2EDuration="2.056824573s" podCreationTimestamp="2025-11-24 03:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:12:41.056699918 +0000 UTC m=+7.121086435" watchObservedRunningTime="2025-11-24 03:12:41.056824573 +0000 UTC m=+7.121211089"
	Nov 24 03:12:42 embed-certs-284604 kubelet[1293]: I1124 03:12:42.506343    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bn8fd" podStartSLOduration=3.506319931 podStartE2EDuration="3.506319931s" podCreationTimestamp="2025-11-24 03:12:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:12:41.065404848 +0000 UTC m=+7.129791366" watchObservedRunningTime="2025-11-24 03:12:42.506319931 +0000 UTC m=+8.570706446"
	Nov 24 03:12:50 embed-certs-284604 kubelet[1293]: I1124 03:12:50.747672    1293 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:12:50 embed-certs-284604 kubelet[1293]: I1124 03:12:50.811409    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dff9f08-8110-4d3a-8505-4e3551179ae8-config-volume\") pod \"coredns-66bc5c9577-89mzc\" (UID: \"7dff9f08-8110-4d3a-8505-4e3551179ae8\") " pod="kube-system/coredns-66bc5c9577-89mzc"
	Nov 24 03:12:50 embed-certs-284604 kubelet[1293]: I1124 03:12:50.811474    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8rh2\" (UniqueName: \"kubernetes.io/projected/b51f7fd3-f53d-4099-9711-9fe1985b9868-kube-api-access-d8rh2\") pod \"storage-provisioner\" (UID: \"b51f7fd3-f53d-4099-9711-9fe1985b9868\") " pod="kube-system/storage-provisioner"
	Nov 24 03:12:50 embed-certs-284604 kubelet[1293]: I1124 03:12:50.811503    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b51f7fd3-f53d-4099-9711-9fe1985b9868-tmp\") pod \"storage-provisioner\" (UID: \"b51f7fd3-f53d-4099-9711-9fe1985b9868\") " pod="kube-system/storage-provisioner"
	Nov 24 03:12:50 embed-certs-284604 kubelet[1293]: I1124 03:12:50.811527    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp7qm\" (UniqueName: \"kubernetes.io/projected/7dff9f08-8110-4d3a-8505-4e3551179ae8-kube-api-access-cp7qm\") pod \"coredns-66bc5c9577-89mzc\" (UID: \"7dff9f08-8110-4d3a-8505-4e3551179ae8\") " pod="kube-system/coredns-66bc5c9577-89mzc"
	Nov 24 03:12:52 embed-certs-284604 kubelet[1293]: I1124 03:12:52.084360    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-89mzc" podStartSLOduration=12.084339457 podStartE2EDuration="12.084339457s" podCreationTimestamp="2025-11-24 03:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:12:52.08431122 +0000 UTC m=+18.148697736" watchObservedRunningTime="2025-11-24 03:12:52.084339457 +0000 UTC m=+18.148725973"
	Nov 24 03:12:54 embed-certs-284604 kubelet[1293]: I1124 03:12:54.381137    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.381107463 podStartE2EDuration="14.381107463s" podCreationTimestamp="2025-11-24 03:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:12:52.105944162 +0000 UTC m=+18.170330674" watchObservedRunningTime="2025-11-24 03:12:54.381107463 +0000 UTC m=+20.445493978"
	Nov 24 03:12:54 embed-certs-284604 kubelet[1293]: I1124 03:12:54.434067    1293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j7gz\" (UniqueName: \"kubernetes.io/projected/84f9c221-0f52-448e-88a0-6d2e90c436b2-kube-api-access-7j7gz\") pod \"busybox\" (UID: \"84f9c221-0f52-448e-88a0-6d2e90c436b2\") " pod="default/busybox"
	Nov 24 03:12:56 embed-certs-284604 kubelet[1293]: I1124 03:12:56.099998    1293 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.361669046 podStartE2EDuration="2.099974735s" podCreationTimestamp="2025-11-24 03:12:54 +0000 UTC" firstStartedPulling="2025-11-24 03:12:54.714511343 +0000 UTC m=+20.778897836" lastFinishedPulling="2025-11-24 03:12:55.452816833 +0000 UTC m=+21.517203525" observedRunningTime="2025-11-24 03:12:56.099705544 +0000 UTC m=+22.164092060" watchObservedRunningTime="2025-11-24 03:12:56.099974735 +0000 UTC m=+22.164361250"
	Nov 24 03:13:02 embed-certs-284604 kubelet[1293]: E1124 03:13:02.467754    1293 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56150->127.0.0.1:38309: write tcp 127.0.0.1:56150->127.0.0.1:38309: write: broken pipe
	
	
	==> storage-provisioner [4ff0beba7e76aa5b9e49f123d0b198e5c6515619b7483197447531e50da4c84e] <==
	I1124 03:12:51.140505       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:12:51.149654       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:12:51.149715       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:12:51.151936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:51.156877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:12:51.157060       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:12:51.157187       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-284604_05521bc8-3fdc-43b9-b257-7317916bc59b!
	I1124 03:12:51.157479       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49bb5acd-171e-4aa8-8356-6bac5deb0205", APIVersion:"v1", ResourceVersion:"407", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-284604_05521bc8-3fdc-43b9-b257-7317916bc59b became leader
	W1124 03:12:51.159194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:51.163179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:12:51.257468       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-284604_05521bc8-3fdc-43b9-b257-7317916bc59b!
	W1124 03:12:53.166293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:53.170056       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:55.173293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:55.178932       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:57.181511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:57.185420       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:59.188326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:59.193097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:01.196044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:01.199912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:03.203417       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:03.208956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
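
Note: the repeated client-go warnings in the storage-provisioner log above come from its leader-election lock, which is still stored in a v1 Endpoints object; every lease renewal therefore trips the API server's deprecation warning (v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice) but is otherwise harmless. For reference, a minimal client-go sketch that reads the replacement EndpointSlice API follows; the kubeconfig discovery and the kube-system namespace are illustrative assumptions, not values taken from this run.

// endpointslices.go - minimal sketch: list discovery.k8s.io/v1 EndpointSlices,
// the replacement for the deprecated v1 Endpoints API warned about above.
// Kubeconfig path and namespace are illustrative assumptions.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	slices, err := client.DiscoveryV1().EndpointSlices("kube-system").
		List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range slices.Items {
		// Each slice carries a subset of one Service's endpoints.
		fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
	}
}

The same information is visible from the CLI with `kubectl get endpointslices -n kube-system`.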
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-284604 -n embed-certs-284604
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-284604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.12s)
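
Note: the kube-scheduler "Failed to watch ... forbidden" errors earlier in this log are ordinary startup ordering: the scheduler's informers begin listing resources before the API server has finished reconciling the built-in RBAC bindings for system:kube-scheduler, and the errors stop once the caches sync (the final "Caches are synced" line). If they persisted, a SubjectAccessReview would confirm what that user is actually allowed to do; the client-go sketch below is a hedged example (the verb and resource mirror one of the errors above, and the kubeconfig discovery is an assumption). The CLI equivalent is `kubectl auth can-i list pods --as=system:kube-scheduler`.

// sar.go - minimal sketch: ask the API server whether system:kube-scheduler
// may list pods at cluster scope, mirroring one of the "forbidden" errors above.
// Kubeconfig discovery and the checked attributes are illustrative assumptions.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sar := &authv1.SubjectAccessReview{
		Spec: authv1.SubjectAccessReviewSpec{
			User: "system:kube-scheduler",
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "pods", // no Namespace set, i.e. cluster scope
			},
		},
	}
	resp, err := client.AuthorizationV1().SubjectAccessReviews().
		Create(context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}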

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-993813 --alsologtostderr -v=1
E1124 03:13:07.306562  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-993813 --alsologtostderr -v=1: exit status 80 (2.252815085s)

-- stdout --
	* Pausing node default-k8s-diff-port-993813 ... 
	
	

-- /stdout --
** stderr ** 
	I1124 03:13:06.067832  669239 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:13:06.067947  669239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:06.067953  669239 out.go:374] Setting ErrFile to fd 2...
	I1124 03:13:06.067956  669239 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:06.068258  669239 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:13:06.068571  669239 out.go:368] Setting JSON to false
	I1124 03:13:06.068602  669239 mustload.go:66] Loading cluster: default-k8s-diff-port-993813
	I1124 03:13:06.068990  669239 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:13:06.069417  669239 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:13:06.089403  669239 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:13:06.089694  669239 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:06.151988  669239 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 03:13:06.141546217 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:06.152663  669239 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763935228-21975/minikube-v1.37.0-1763935228-21975-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763935228-21975-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-993813 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 03:13:06.154709  669239 out.go:179] * Pausing node default-k8s-diff-port-993813 ... 
	I1124 03:13:06.155758  669239 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:13:06.156051  669239 ssh_runner.go:195] Run: systemctl --version
	I1124 03:13:06.156106  669239 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:13:06.174453  669239 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:13:06.272930  669239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:06.294126  669239 pause.go:52] kubelet running: true
	I1124 03:13:06.294189  669239 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:13:06.455320  669239 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:13:06.455407  669239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:13:06.518568  669239 cri.go:89] found id: "573f6a7cb37360f3f1fa23cf31377d96a60d7bd6c0e83b06a385f1fcb540e103"
	I1124 03:13:06.518591  669239 cri.go:89] found id: "4215d37d945b02ffa680f6a88a284357077e2085850453212142af5a50e8e540"
	I1124 03:13:06.518597  669239 cri.go:89] found id: "1bd7fbd7ac7308bdb9bfcef37d44d50f647796051adcb416cecd8027eff0b98e"
	I1124 03:13:06.518601  669239 cri.go:89] found id: "e9aedcc7b2f459c0aa678060a0430af50f95c9ae8cc09573789ea82fcb7fafac"
	I1124 03:13:06.518605  669239 cri.go:89] found id: "98b77ba6e3b6b9a9bb0fd551092cc96efbc1de2ae458e7b1cda2d0aa23b17186"
	I1124 03:13:06.518610  669239 cri.go:89] found id: "9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6"
	I1124 03:13:06.518615  669239 cri.go:89] found id: "a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329"
	I1124 03:13:06.518619  669239 cri.go:89] found id: "dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7"
	I1124 03:13:06.518623  669239 cri.go:89] found id: "11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e"
	I1124 03:13:06.518642  669239 cri.go:89] found id: "ca56ee1046dfd67567192ecb6131590bdde60902b351bebce38f90604b67c2ba"
	I1124 03:13:06.518651  669239 cri.go:89] found id: "93cf9607a612fd45cf69895841118ca18e88cd31bd1ae578c8b2d22db2c14cad"
	I1124 03:13:06.518656  669239 cri.go:89] found id: ""
	I1124 03:13:06.518713  669239 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:13:06.530648  669239 retry.go:31] will retry after 181.611761ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:13:06Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:13:06.713065  669239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:06.725634  669239 pause.go:52] kubelet running: false
	I1124 03:13:06.725689  669239 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:13:06.863708  669239 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:13:06.863792  669239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:13:06.924814  669239 cri.go:89] found id: "573f6a7cb37360f3f1fa23cf31377d96a60d7bd6c0e83b06a385f1fcb540e103"
	I1124 03:13:06.924842  669239 cri.go:89] found id: "4215d37d945b02ffa680f6a88a284357077e2085850453212142af5a50e8e540"
	I1124 03:13:06.924849  669239 cri.go:89] found id: "1bd7fbd7ac7308bdb9bfcef37d44d50f647796051adcb416cecd8027eff0b98e"
	I1124 03:13:06.924854  669239 cri.go:89] found id: "e9aedcc7b2f459c0aa678060a0430af50f95c9ae8cc09573789ea82fcb7fafac"
	I1124 03:13:06.924860  669239 cri.go:89] found id: "98b77ba6e3b6b9a9bb0fd551092cc96efbc1de2ae458e7b1cda2d0aa23b17186"
	I1124 03:13:06.924865  669239 cri.go:89] found id: "9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6"
	I1124 03:13:06.924870  669239 cri.go:89] found id: "a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329"
	I1124 03:13:06.924874  669239 cri.go:89] found id: "dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7"
	I1124 03:13:06.924879  669239 cri.go:89] found id: "11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e"
	I1124 03:13:06.924902  669239 cri.go:89] found id: "ca56ee1046dfd67567192ecb6131590bdde60902b351bebce38f90604b67c2ba"
	I1124 03:13:06.924911  669239 cri.go:89] found id: "93cf9607a612fd45cf69895841118ca18e88cd31bd1ae578c8b2d22db2c14cad"
	I1124 03:13:06.924915  669239 cri.go:89] found id: ""
	I1124 03:13:06.924961  669239 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:13:06.936312  669239 retry.go:31] will retry after 513.1508ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:13:06Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:13:07.449664  669239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:07.462027  669239 pause.go:52] kubelet running: false
	I1124 03:13:07.462099  669239 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:13:07.608966  669239 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:13:07.609042  669239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:13:07.671742  669239 cri.go:89] found id: "573f6a7cb37360f3f1fa23cf31377d96a60d7bd6c0e83b06a385f1fcb540e103"
	I1124 03:13:07.671764  669239 cri.go:89] found id: "4215d37d945b02ffa680f6a88a284357077e2085850453212142af5a50e8e540"
	I1124 03:13:07.671769  669239 cri.go:89] found id: "1bd7fbd7ac7308bdb9bfcef37d44d50f647796051adcb416cecd8027eff0b98e"
	I1124 03:13:07.671782  669239 cri.go:89] found id: "e9aedcc7b2f459c0aa678060a0430af50f95c9ae8cc09573789ea82fcb7fafac"
	I1124 03:13:07.671785  669239 cri.go:89] found id: "98b77ba6e3b6b9a9bb0fd551092cc96efbc1de2ae458e7b1cda2d0aa23b17186"
	I1124 03:13:07.671789  669239 cri.go:89] found id: "9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6"
	I1124 03:13:07.671793  669239 cri.go:89] found id: "a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329"
	I1124 03:13:07.671797  669239 cri.go:89] found id: "dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7"
	I1124 03:13:07.671802  669239 cri.go:89] found id: "11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e"
	I1124 03:13:07.671810  669239 cri.go:89] found id: "ca56ee1046dfd67567192ecb6131590bdde60902b351bebce38f90604b67c2ba"
	I1124 03:13:07.671818  669239 cri.go:89] found id: "93cf9607a612fd45cf69895841118ca18e88cd31bd1ae578c8b2d22db2c14cad"
	I1124 03:13:07.671823  669239 cri.go:89] found id: ""
	I1124 03:13:07.671864  669239 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:13:07.683214  669239 retry.go:31] will retry after 330.127061ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:13:07Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:13:08.013711  669239 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:08.026672  669239 pause.go:52] kubelet running: false
	I1124 03:13:08.026722  669239 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:13:08.166349  669239 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:13:08.166422  669239 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:13:08.231414  669239 cri.go:89] found id: "573f6a7cb37360f3f1fa23cf31377d96a60d7bd6c0e83b06a385f1fcb540e103"
	I1124 03:13:08.231440  669239 cri.go:89] found id: "4215d37d945b02ffa680f6a88a284357077e2085850453212142af5a50e8e540"
	I1124 03:13:08.231447  669239 cri.go:89] found id: "1bd7fbd7ac7308bdb9bfcef37d44d50f647796051adcb416cecd8027eff0b98e"
	I1124 03:13:08.231453  669239 cri.go:89] found id: "e9aedcc7b2f459c0aa678060a0430af50f95c9ae8cc09573789ea82fcb7fafac"
	I1124 03:13:08.231457  669239 cri.go:89] found id: "98b77ba6e3b6b9a9bb0fd551092cc96efbc1de2ae458e7b1cda2d0aa23b17186"
	I1124 03:13:08.231472  669239 cri.go:89] found id: "9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6"
	I1124 03:13:08.231475  669239 cri.go:89] found id: "a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329"
	I1124 03:13:08.231483  669239 cri.go:89] found id: "dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7"
	I1124 03:13:08.231486  669239 cri.go:89] found id: "11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e"
	I1124 03:13:08.231496  669239 cri.go:89] found id: "ca56ee1046dfd67567192ecb6131590bdde60902b351bebce38f90604b67c2ba"
	I1124 03:13:08.231503  669239 cri.go:89] found id: "93cf9607a612fd45cf69895841118ca18e88cd31bd1ae578c8b2d22db2c14cad"
	I1124 03:13:08.231505  669239 cri.go:89] found id: ""
	I1124 03:13:08.231553  669239 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:13:08.244942  669239 out.go:203] 
	W1124 03:13:08.246089  669239 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:13:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:13:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:13:08.246110  669239 out.go:285] * 
	* 
	W1124 03:13:08.250616  669239 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:13:08.251786  669239 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-993813 --alsologtostderr -v=1 failed: exit status 80
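The failure sequence is visible directly in the trace above: `minikube pause` stops the kubelet, enumerates CRI containers with crictl (successfully, per the "found id:" lines), then shells out to `sudo runc list -f json`, which fails because /run/runc, runc's default state root, does not exist on the node; minikube retries the listing a few times with short randomized delays (the retry.go:31 lines) and finally exits with GUEST_PAUSE. That directory may legitimately be absent when the CRI-O-managed runtime keeps its state under a different root, so the runc listing, not the cluster itself, is the fragile step. Below is a minimal stdlib-Go sketch of the retry pattern the log exhibits; the attempt count and delay bound are illustrative assumptions, not minikube's actual tuning.

// retrysketch.go - the retry-with-randomized-backoff pattern behind the
// "will retry after ..." lines above. Attempt count and delay bound are
// illustrative assumptions, not minikube's real configuration.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// retry runs op up to attempts times, sleeping a random duration up to
// maxDelay between failures, and returns the last error if all attempts fail.
func retry(attempts int, maxDelay time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		d := time.Duration(rand.Int63n(int64(maxDelay)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	err := retry(4, 600*time.Millisecond, func() error {
		// The fragile step from the log: list runc-managed containers.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			return errors.New(string(out))
		}
		return nil
	})
	if err != nil {
		fmt.Println("pause would fail here:", err)
	}
}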
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-993813
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-993813:

-- stdout --
	[
	    {
	        "Id": "b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8",
	        "Created": "2025-11-24T03:10:55.916288058Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 656843,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:12:05.040714034Z",
	            "FinishedAt": "2025-11-24T03:12:04.193532321Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8/hosts",
	        "LogPath": "/var/lib/docker/containers/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8-json.log",
	        "Name": "/default-k8s-diff-port-993813",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-993813:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-993813",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8",
	                "LowerDir": "/var/lib/docker/overlay2/6fe49d723a85944578f35e13638f43b6277cc82d6ac33569536577f7d90c4edd-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6fe49d723a85944578f35e13638f43b6277cc82d6ac33569536577f7d90c4edd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6fe49d723a85944578f35e13638f43b6277cc82d6ac33569536577f7d90c4edd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6fe49d723a85944578f35e13638f43b6277cc82d6ac33569536577f7d90c4edd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-993813",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-993813/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-993813",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-993813",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-993813",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "07e27320d0bcf192a08231c130bc772c75cc476c063f5b8b8867087b38a27191",
	            "SandboxKey": "/var/run/docker/netns/07e27320d0bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-993813": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50b2e4e61586f7fb59c4f56c2607ad50e6dc9faf4b2e274df27c397b878fe391",
	                    "EndpointID": "8bec0b259cf3bdbfbcf94795f0c484c0f8c8b83f2d759caefe6aa476c44ed74b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "fa:4b:98:39:1c:ec",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-993813",
	                        "b38aecdd5f9d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813: exit status 2 (316.851621ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-993813 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-993813 logs -n 25: (1.115212909s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-603010 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-993813 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ stop    │ -p no-preload-603010 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-579951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-438041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ image   │ newest-cni-438041 image list --format=json                                                                                                                                                                                                    │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p newest-cni-438041 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p no-preload-603010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p disable-driver-mounts-242597                                                                                                                                                                                                               │ disable-driver-mounts-242597 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ image   │ old-k8s-version-579951 image list --format=json                                                                                                                                                                                               │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p old-k8s-version-579951 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                                                                                     │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                                                                                     │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable metrics-server -p embed-certs-284604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ stop    │ -p embed-certs-284604 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ image   │ default-k8s-diff-port-993813 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ pause   │ -p default-k8s-diff-port-993813 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
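	
	The audit table above records every minikube invocation that touched these profiles: command, arguments, profile, user, minikube version, and start/end timestamps. A blank END TIME means the command had not completed (or failed) by the time the log was captured; here that applies to the pause invocations under test and to the enable metrics-server/stop commands still in flight. The table is part of the captured minikube logs output, so it can be reproduced per profile with (a sketch):
	
	    out/minikube-linux-amd64 -p default-k8s-diff-port-993813 logs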
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:12:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:12:09.055015  658811 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:12:09.055230  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055247  658811 out.go:374] Setting ErrFile to fd 2...
	I1124 03:12:09.055253  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055468  658811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:12:09.055909  658811 out.go:368] Setting JSON to false
	I1124 03:12:09.056956  658811 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6876,"bootTime":1763947053,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:12:09.057009  658811 start.go:143] virtualization: kvm guest
	I1124 03:12:09.058671  658811 out.go:179] * [embed-certs-284604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:12:09.059850  658811 notify.go:221] Checking for updates...
	I1124 03:12:09.059855  658811 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:12:09.061128  658811 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:12:09.062317  658811 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:09.063358  658811 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:12:09.064255  658811 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:12:09.065078  658811 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:12:09.066407  658811 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066509  658811 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066589  658811 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:12:09.066666  658811 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:12:09.089713  658811 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:12:09.089855  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.145948  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.135562124 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.146071  658811 docker.go:319] overlay module found
	I1124 03:12:09.147708  658811 out.go:179] * Using the docker driver based on user configuration
	I1124 03:12:09.148714  658811 start.go:309] selected driver: docker
	I1124 03:12:09.148737  658811 start.go:927] validating driver "docker" against <nil>
	I1124 03:12:09.148747  658811 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:12:09.149338  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.210343  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.200351707 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.210534  658811 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:12:09.210794  658811 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:09.212381  658811 out.go:179] * Using Docker driver with root privileges
	I1124 03:12:09.213398  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:09.213482  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:09.213497  658811 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:12:09.213574  658811 start.go:353] cluster config:
	{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:09.214730  658811 out.go:179] * Starting "embed-certs-284604" primary control-plane node in "embed-certs-284604" cluster
	I1124 03:12:09.215613  658811 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:12:09.216663  658811 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:12:09.217654  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.217694  658811 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:12:09.217703  658811 cache.go:65] Caching tarball of preloaded images
	I1124 03:12:09.217732  658811 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:12:09.217791  658811 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:12:09.217808  658811 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:12:09.217977  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:09.218021  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json: {Name:mkd4898576ebe0ebf6d2ca35fddd33eac8f127df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:09.239944  658811 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:12:09.239962  658811 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:12:09.239976  658811 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:12:09.240004  658811 start.go:360] acquireMachinesLock for embed-certs-284604: {Name:mkd39be5908e1d289ed5af40b6c2b1c510beffd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:12:09.240088  658811 start.go:364] duration metric: took 68.665µs to acquireMachinesLock for "embed-certs-284604"
	I1124 03:12:09.240109  658811 start.go:93] Provisioning new machine with config: &{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:09.240182  658811 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:12:05.014758  656542 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-993813" ...
	I1124 03:12:05.014805  656542 cli_runner.go:164] Run: docker start default-k8s-diff-port-993813
	I1124 03:12:05.297424  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:05.316835  656542 kic.go:430] container "default-k8s-diff-port-993813" state is running.
	I1124 03:12:05.317309  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:05.336690  656542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/config.json ...
	I1124 03:12:05.336923  656542 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:05.336992  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:05.356564  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:05.356863  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:05.356907  656542 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:05.357642  656542 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39256->127.0.0.1:33488: read: connection reset by peer
	I1124 03:12:08.497704  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.497744  656542 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993813"
	I1124 03:12:08.497799  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.516284  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.516620  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.516642  656542 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993813 && echo "default-k8s-diff-port-993813" | sudo tee /etc/hostname
	I1124 03:12:08.664299  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.664399  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.683215  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.683424  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.683440  656542 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993813/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:08.824495  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
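	
	The script run over SSH above is minikube's idempotent hostname fix-up: if /etc/hosts does not already map the new hostname, it rewrites an existing 127.0.1.1 entry or appends one. A standalone sketch of the same logic (hostname hard-coded for illustration):
	
	    HOST=default-k8s-diff-port-993813        # illustrative value
	    if ! grep -q "[[:space:]]${HOST}\$" /etc/hosts; then
	      if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	        # rewrite the existing 127.0.1.1 line in place
	        sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${HOST}/" /etc/hosts
	      else
	        # no 127.0.1.1 entry yet; append one
	        echo "127.0.1.1 ${HOST}" | sudo tee -a /etc/hosts
	      fi
	    fi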
	I1124 03:12:08.824534  656542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:08.824571  656542 ubuntu.go:190] setting up certificates
	I1124 03:12:08.824597  656542 provision.go:84] configureAuth start
	I1124 03:12:08.824659  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:08.842592  656542 provision.go:143] copyHostCerts
	I1124 03:12:08.842639  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:08.842651  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:08.842701  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:08.842805  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:08.842813  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:08.842838  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:08.842940  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:08.842950  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:08.842981  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:08.843051  656542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993813 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-993813 localhost minikube]
	I1124 03:12:08.993088  656542 provision.go:177] copyRemoteCerts
	I1124 03:12:08.993141  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:08.993180  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.010481  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.112610  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:09.134182  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 03:12:09.153393  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:12:09.173516  656542 provision.go:87] duration metric: took 348.902104ms to configureAuth
	I1124 03:12:09.173547  656542 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:09.173717  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.173820  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.195519  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:09.195738  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:09.195756  656542 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.551404  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:09.551434  656542 machine.go:97] duration metric: took 4.214494542s to provisionDockerMachine
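	
	The SSH command completed above writes an environment drop-in and restarts CRI-O so the extra --insecure-registry flag for the service CIDR takes effect. This presumably relies on the kicbase image's crio.service sourcing /etc/sysconfig/crio.minikube (an assumption inferred from the path, not verified here). To inspect the result on the node:
	
	    # show the drop-in minikube just wrote (path taken from the log above)
	    docker exec default-k8s-diff-port-993813 cat /etc/sysconfig/crio.minikube
	    # confirm the unit sources it (assumes systemd runs inside the container)
	    docker exec default-k8s-diff-port-993813 systemctl cat crio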
	I1124 03:12:09.551449  656542 start.go:293] postStartSetup for "default-k8s-diff-port-993813" (driver="docker")
	I1124 03:12:09.551463  656542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:09.551533  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:09.551574  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.572440  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.684044  656542 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:09.688328  656542 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:09.688354  656542 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:09.688365  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:09.688414  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:09.688488  656542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:09.688660  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:09.696023  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:09.725715  656542 start.go:296] duration metric: took 174.248037ms for postStartSetup
	I1124 03:12:09.725795  656542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:09.725851  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.747235  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:06.610202  657716 out.go:252] * Restarting existing docker container for "no-preload-603010" ...
	I1124 03:12:06.610267  657716 cli_runner.go:164] Run: docker start no-preload-603010
	I1124 03:12:06.895418  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:06.913279  657716 kic.go:430] container "no-preload-603010" state is running.
	I1124 03:12:06.913694  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:06.931543  657716 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/config.json ...
	I1124 03:12:06.931779  657716 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:06.931840  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:06.949180  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:06.949422  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:06.949436  657716 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:06.950106  657716 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53738->127.0.0.1:33493: read: connection reset by peer
	I1124 03:12:10.094410  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.094455  657716 ubuntu.go:182] provisioning hostname "no-preload-603010"
	I1124 03:12:10.094548  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.117277  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.117614  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.117637  657716 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-603010 && echo "no-preload-603010" | sudo tee /etc/hostname
	I1124 03:12:10.272082  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.272162  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.293197  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.293525  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.293557  657716 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603010' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603010/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603010' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:10.440289  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:10.440322  657716 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:10.440350  657716 ubuntu.go:190] setting up certificates
	I1124 03:12:10.440374  657716 provision.go:84] configureAuth start
	I1124 03:12:10.440443  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:10.458672  657716 provision.go:143] copyHostCerts
	I1124 03:12:10.458743  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:10.458766  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:10.458857  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:10.459021  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:10.459037  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:10.459080  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:10.459183  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:10.459195  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:10.459232  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:10.459323  657716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.no-preload-603010 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-603010]
	I1124 03:12:10.546420  657716 provision.go:177] copyRemoteCerts
	I1124 03:12:10.546503  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:10.546552  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.564799  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:10.669343  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:10.687953  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:10.707320  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:10.728398  657716 provision.go:87] duration metric: took 288.002675ms to configureAuth
	I1124 03:12:10.728450  657716 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:10.728791  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:10.728992  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.754544  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.754857  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.754907  657716 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.846210  656542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:09.851045  656542 fix.go:56] duration metric: took 4.853815531s for fixHost
	I1124 03:12:09.851067  656542 start.go:83] releasing machines lock for "default-k8s-diff-port-993813", held for 4.853861223s
	I1124 03:12:09.851139  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:09.871679  656542 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:09.871744  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.871767  656542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:09.871859  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.897665  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.897832  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.996390  656542 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:10.070447  656542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:10.108350  656542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:10.113659  656542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:10.113732  656542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:10.122258  656542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:12:10.122274  656542 start.go:496] detecting cgroup driver to use...
	I1124 03:12:10.122301  656542 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:10.122333  656542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:10.138420  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:10.151623  656542 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:10.151696  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:10.169717  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:10.185403  656542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:10.268937  656542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:10.361626  656542 docker.go:234] disabling docker service ...
	I1124 03:12:10.361713  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:10.376259  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:10.389709  656542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:10.493317  656542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:10.581163  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:10.594309  656542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:10.608489  656542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:10.608559  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.618090  656542 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:10.618147  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.629142  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.639755  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.648289  656542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:10.657390  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.667835  656542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.677148  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.686554  656542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:10.694262  656542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:10.701983  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:10.784645  656542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:12:13.176259  656542 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.391580237s)
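	
	Taken together, the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the ~2.4s crio restart. Reconstructed from the logged commands (a sketch, not a dump from the node), the drop-in should end up containing:
	
	    docker exec default-k8s-diff-port-993813 cat /etc/crio/crio.conf.d/02-crio.conf
	    # expected to contain, among other keys:
	    #   pause_image = "registry.k8s.io/pause:3.10.1"
	    #   cgroup_manager = "systemd"
	    #   conmon_cgroup = "pod"
	    #   default_sysctls = [
	    #     "net.ipv4.ip_unprivileged_port_start=0",
	    #   ]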
	I1124 03:12:13.176297  656542 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:13.176344  656542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:13.182771  656542 start.go:564] Will wait 60s for crictl version
	I1124 03:12:13.182920  656542 ssh_runner.go:195] Run: which crictl
	I1124 03:12:13.188282  656542 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:13.221129  656542 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:13.221208  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.256022  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.289098  656542 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1124 03:12:09.667322  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:11.810684  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:09.241811  658811 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:12:09.242074  658811 start.go:159] libmachine.API.Create for "embed-certs-284604" (driver="docker")
	I1124 03:12:09.242107  658811 client.go:173] LocalClient.Create starting
	I1124 03:12:09.242186  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:12:09.242224  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242246  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242326  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:12:09.242354  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242374  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242824  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:12:09.259427  658811 cli_runner.go:211] docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:12:09.259477  658811 network_create.go:284] running [docker network inspect embed-certs-284604] to gather additional debugging logs...
	I1124 03:12:09.259492  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604
	W1124 03:12:09.275004  658811 cli_runner.go:211] docker network inspect embed-certs-284604 returned with exit code 1
	I1124 03:12:09.275029  658811 network_create.go:287] error running [docker network inspect embed-certs-284604]: docker network inspect embed-certs-284604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-284604 not found
	I1124 03:12:09.275039  658811 network_create.go:289] output of [docker network inspect embed-certs-284604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-284604 not found
	
	** /stderr **
	I1124 03:12:09.275132  658811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:09.292074  658811 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:12:09.292745  658811 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:12:09.293207  658811 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:12:09.293801  658811 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-50b2e4e61586 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:0d:88:19:7d:df} reservation:<nil>}
	I1124 03:12:09.294406  658811 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6fb41680caed IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:a0:6c:44:95:b2} reservation:<nil>}
	I1124 03:12:09.295273  658811 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eef7f0}
	I1124 03:12:09.295296  658811 network_create.go:124] attempt to create docker network embed-certs-284604 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 03:12:09.295333  658811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-284604 embed-certs-284604
	I1124 03:12:09.341016  658811 network_create.go:108] docker network embed-certs-284604 192.168.94.0/24 created
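	
	The network.go lines above show minikube scanning 192.168.x.0/24 blocks in steps of 9 (49, 58, 67, 76, 85, ...) and taking the first one no existing docker bridge occupies; 192.168.94.0/24 was free, so it became the cluster network. A rough shell equivalent of that scan (variable names are illustrative):
	
	    # collect subnets already claimed by docker networks
	    used=$(docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}' \
	      $(docker network ls -q) | sort -u)
	    # walk candidate /24s in the same step-9 sequence, stop at the first free one
	    for third in 49 58 67 76 85 94 103 112; do
	      cidr="192.168.${third}.0/24"
	      echo "$used" | grep -qx "$cidr" || { echo "free: $cidr"; break; }
	    done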
	I1124 03:12:09.341044  658811 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-284604" container
	I1124 03:12:09.341097  658811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:12:09.358710  658811 cli_runner.go:164] Run: docker volume create embed-certs-284604 --label name.minikube.sigs.k8s.io=embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:12:09.377491  658811 oci.go:103] Successfully created a docker volume embed-certs-284604
	I1124 03:12:09.377565  658811 cli_runner.go:164] Run: docker run --rm --name embed-certs-284604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --entrypoint /usr/bin/test -v embed-certs-284604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:12:09.757637  658811 oci.go:107] Successfully prepared a docker volume embed-certs-284604
	I1124 03:12:09.757726  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.757742  658811 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:12:09.757816  658811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:12:13.055592  658811 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (3.297719307s)
	I1124 03:12:13.055632  658811 kic.go:203] duration metric: took 3.29788472s to extract preloaded images to volume ...
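The preload check above resolves to the tarball path visible in the docker run line. A sketch of how such a cache filename can be assembled from the Kubernetes version, runtime and architecture; the "v18" schema tag and the crio-to-cri-o spelling are read off the path in the log, and the helper name is hypothetical:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// preloadTarball reconstructs the cache filename seen in the log:
// preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
// "v18" is the preload schema version observed in the log, not computed here.
func preloadTarball(k8sVersion, runtime, arch string) string {
	// The log shows "crio" rendered as "cri-o" in the filename.
	if runtime == "crio" {
		runtime = "cri-o"
	}
	name := strings.Join([]string{
		"preloaded-images-k8s-v18", k8sVersion, runtime, "overlay", arch,
	}, "-")
	return name + ".tar.lz4"
}

func main() {
	p := filepath.Join(os.Getenv("HOME"),
		".minikube/cache/preloaded-tarball",
		preloadTarball("v1.34.1", "crio", "amd64"))
	if _, err := os.Stat(p); err == nil {
		fmt.Println("preload exists:", p)
	} else {
		fmt.Println("no preload, images will be pulled:", p)
	}
}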
	W1124 03:12:13.055721  658811 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:12:13.055758  658811 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:12:13.055810  658811 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:12:13.124836  658811 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-284604 --name embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-284604 --network embed-certs-284604 --ip 192.168.94.2 --volume embed-certs-284604:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:12:13.468642  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Running}}
	I1124 03:12:13.493010  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.520114  658811 cli_runner.go:164] Run: docker exec embed-certs-284604 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:12:13.579438  658811 oci.go:144] the created container "embed-certs-284604" has a running status.
	I1124 03:12:13.579473  658811 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa...
	I1124 03:12:13.686392  658811 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:12:13.719014  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.744934  658811 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:12:13.744979  658811 kic_runner.go:114] Args: [docker exec --privileged embed-certs-284604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:12:13.804379  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.833184  658811 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:13.833391  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:13.865266  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:13.865635  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:13.865670  658811 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:13.866448  658811 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55158->127.0.0.1:33498: read: connection reset by peer
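The handshake failure above is expected this early: sshd inside the just-started container is not accepting connections yet, and provisioning simply retries the dial. A hedged sketch of such a retry loop; the attempt count and backoff are illustrative values, not minikube's:

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps attempting a TCP connection until the container's
// sshd starts accepting; connection resets right after boot are normal.
func dialWithRetry(addr string, attempts int, backoff time.Duration) (net.Conn, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			return conn, nil
		}
		lastErr = err
		time.Sleep(backoff)
	}
	return nil, fmt.Errorf("ssh port never came up: %w", lastErr)
}

func main() {
	// 127.0.0.1:33498 is the forwarded SSH port from the log.
	conn, err := dialWithRetry("127.0.0.1:33498", 10, 500*time.Millisecond)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer conn.Close()
	fmt.Println("connected:", conn.RemoteAddr())
}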
	I1124 03:12:13.290552  656542 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:13.314170  656542 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:13.318716  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:13.333300  656542 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:13.333436  656542 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:13.333523  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.375001  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.375027  656542 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:13.375078  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.407152  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.407180  656542 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:13.407190  656542 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 03:12:13.407342  656542 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:13.407444  656542 ssh_runner.go:195] Run: crio config
	I1124 03:12:13.468159  656542 cni.go:84] Creating CNI manager for ""
	I1124 03:12:13.468191  656542 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:13.468220  656542 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:13.468251  656542 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993813 NodeName:default-k8s-diff-port-993813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:13.468425  656542 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993813"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:13.468485  656542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:13.480922  656542 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:13.480989  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:13.491437  656542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 03:12:13.510538  656542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:13.531599  656542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1124 03:12:13.550625  656542 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:13.557123  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
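Both /etc/hosts edits above use the same idempotent shell pattern: filter out any existing line for the name, append the fresh mapping, then copy the temp file back with sudo. The same logic in Go, for readability (root privileges are still required to write /etc/hosts; the function name is hypothetical):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so that exactly one line
// maps name -> ip, mirroring the grep -v / echo / cp pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for this name (the grep -v step).
		if strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}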
	I1124 03:12:13.570105  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:13.687069  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:13.711246  656542 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813 for IP: 192.168.76.2
	I1124 03:12:13.711268  656542 certs.go:195] generating shared ca certs ...
	I1124 03:12:13.711287  656542 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:13.711456  656542 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:13.711513  656542 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:13.711526  656542 certs.go:257] generating profile certs ...
	I1124 03:12:13.711642  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key
	I1124 03:12:13.711706  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619
	I1124 03:12:13.711753  656542 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key
	I1124 03:12:13.711996  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:13.712051  656542 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:13.712065  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:13.712101  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:13.712139  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:13.712175  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:13.712240  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.712851  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:13.744604  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:13.773924  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:13.797454  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:13.831783  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 03:12:13.870484  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:13.900124  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:13.922822  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:12:13.948171  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:13.977351  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:14.003032  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:14.029032  656542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:14.044929  656542 ssh_runner.go:195] Run: openssl version
	I1124 03:12:14.055102  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:14.069569  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074149  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074206  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.129455  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:14.139467  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:14.150460  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155547  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155598  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.213122  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:14.224488  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:14.235043  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239741  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239796  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.296275  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
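The ls / openssl x509 -hash / ln -fs sequence repeated above implements OpenSSL's hashed-directory convention: every CA under /etc/ssl/certs must be reachable as <subject-hash>.0 (b5213941.0 for minikubeCA here). An equivalent helper in Go that shells out to openssl exactly as the log does (the helper name is hypothetical):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates the <hash>.0 symlink that OpenSSL's
// certificate directory lookup expects, matching the ln -fs in the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA
	link := certsDir + "/" + hash + ".0"
	os.Remove(link) // the -f in ln -fs: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	if err != nil {
		fmt.Println(err)
	}
}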
	I1124 03:12:14.307247  656542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:14.315784  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:14.374911  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:14.452037  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:14.514532  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:14.577046  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:14.634822  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
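The six openssl x509 -checkend 86400 runs above ask whether each control-plane certificate is still valid 24 hours from now; a non-zero exit would force regeneration. The same test with Go's x509 parser, under the assumption that only NotAfter matters for this check:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// the same question `openssl x509 -checkend <seconds>` answers.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, soon, err)
	}
}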
	I1124 03:12:14.697600  656542 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:14.697704  656542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:14.697759  656542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:14.736428  656542 cri.go:89] found id: "9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6"
	I1124 03:12:14.736451  656542 cri.go:89] found id: "a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329"
	I1124 03:12:14.736458  656542 cri.go:89] found id: "dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7"
	I1124 03:12:14.736462  656542 cri.go:89] found id: "11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e"
	I1124 03:12:14.736466  656542 cri.go:89] found id: ""
	I1124 03:12:14.736511  656542 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:14.754070  656542 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:14Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:14.754156  656542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:14.765200  656542 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:14.765224  656542 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:14.765273  656542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:14.773243  656542 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:14.773947  656542 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993813" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.774328  656542 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993813" cluster setting kubeconfig missing "default-k8s-diff-port-993813" context setting]
	I1124 03:12:14.774925  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.776519  656542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:14.785657  656542 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 03:12:14.785687  656542 kubeadm.go:602] duration metric: took 20.455875ms to restartPrimaryControlPlane
	I1124 03:12:14.785704  656542 kubeadm.go:403] duration metric: took 88.114399ms to StartCluster
	I1124 03:12:14.785722  656542 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.785796  656542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.786941  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.787180  656542 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:14.787429  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:14.787487  656542 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:14.787568  656542 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.787584  656542 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.787592  656542 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:14.787615  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.788183  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.788464  656542 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788516  656542 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993813"
	I1124 03:12:14.788466  656542 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788738  656542 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.788750  656542 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:14.788782  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.789431  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.789731  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.792034  656542 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:14.793166  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.820828  656542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:14.821632  656542 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.821655  656542 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:14.821731  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.821909  656542 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:12:14.822084  656542 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:14.822112  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:14.822188  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.822548  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.827335  656542 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:13.173638  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:13.173665  657716 machine.go:97] duration metric: took 6.241868553s to provisionDockerMachine
	I1124 03:12:13.173679  657716 start.go:293] postStartSetup for "no-preload-603010" (driver="docker")
	I1124 03:12:13.173692  657716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:13.173754  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:13.173803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.199819  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.311414  657716 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:13.316263  657716 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:13.316292  657716 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:13.316304  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:13.316362  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:13.316451  657716 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:13.316564  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:13.330333  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.349678  657716 start.go:296] duration metric: took 175.98281ms for postStartSetup
	I1124 03:12:13.349757  657716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:13.349803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.372668  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.477580  657716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:13.483572  657716 fix.go:56] duration metric: took 6.891356705s for fixHost
	I1124 03:12:13.483602  657716 start.go:83] releasing machines lock for "no-preload-603010", held for 6.891418388s
	I1124 03:12:13.483679  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:13.509057  657716 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:13.509123  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.509169  657716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:13.509281  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.533830  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.535423  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.716640  657716 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:13.727633  657716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:13.784701  657716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:13.789877  657716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:13.789964  657716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:13.799956  657716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:12:13.799989  657716 start.go:496] detecting cgroup driver to use...
	I1124 03:12:13.800021  657716 detect.go:190] detected "systemd" cgroup driver on host os
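For the cgroup driver detection above, one common heuristic (not necessarily minikube's exact logic) is to report "systemd" when the host runs systemd as PID 1 or exposes a unified cgroup v2 hierarchy:

package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver is a rough heuristic, not minikube's exact check:
// systemd as PID 1 (signalled by /run/systemd/system) or a unified
// cgroup v2 mount both point at the "systemd" cgroup driver.
func detectCgroupDriver() string {
	if _, err := os.Stat("/run/systemd/system"); err == nil {
		return "systemd"
	}
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Println("detected cgroup driver:", detectCgroupDriver())
}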
	I1124 03:12:13.800080  657716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:13.821650  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:13.845364  657716 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:13.845437  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:13.876223  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:13.896810  657716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:14.018144  657716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:14.133192  657716 docker.go:234] disabling docker service ...
	I1124 03:12:14.133276  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:14.151812  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:14.167561  657716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:14.282838  657716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:14.401610  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:14.417930  657716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:14.437107  657716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:14.437170  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.449631  657716 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:14.449698  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.462463  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.477641  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.490417  657716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:14.504273  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.516484  657716 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.526509  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.538280  657716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:14.546998  657716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:14.555574  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.685636  657716 ssh_runner.go:195] Run: sudo systemctl restart crio
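Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the restart; the keys and values are exactly those substituted in the log, while the surrounding TOML tables are omitted because the log does not show them:

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]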
	I1124 03:12:14.944749  657716 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:14.944917  657716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:14.950036  657716 start.go:564] Will wait 60s for crictl version
	I1124 03:12:14.950115  657716 ssh_runner.go:195] Run: which crictl
	I1124 03:12:14.954328  657716 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:14.985292  657716 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:14.985374  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.030503  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.075694  657716 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:15.076822  657716 cli_runner.go:164] Run: docker network inspect no-preload-603010 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:15.102488  657716 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:15.108702  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:15.124431  657716 kubeadm.go:884] updating cluster {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:15.124588  657716 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:15.124636  657716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:15.167486  657716 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:15.167521  657716 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:15.167539  657716 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:15.167821  657716 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-603010 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:15.167925  657716 ssh_runner.go:195] Run: crio config
	I1124 03:12:15.235069  657716 cni.go:84] Creating CNI manager for ""
	I1124 03:12:15.235092  657716 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:15.235110  657716 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:15.235137  657716 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603010 NodeName:no-preload-603010 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:15.235315  657716 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603010"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:15.235402  657716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:15.246426  657716 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:15.246486  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:15.255073  657716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:12:15.274174  657716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:15.291964  657716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1124 03:12:15.310704  657716 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:15.315241  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:15.329049  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:15.444004  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:15.468249  657716 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010 for IP: 192.168.85.2
	I1124 03:12:15.468275  657716 certs.go:195] generating shared ca certs ...
	I1124 03:12:15.468303  657716 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:15.468461  657716 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:15.468527  657716 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:15.468545  657716 certs.go:257] generating profile certs ...
	I1124 03:12:15.468671  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key
	I1124 03:12:15.468756  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738
	I1124 03:12:15.468820  657716 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key
	I1124 03:12:15.469056  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:15.469155  657716 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:15.469190  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:15.469235  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:15.469307  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:15.469360  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:15.469452  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:15.470423  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:15.492954  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:15.516840  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:15.539720  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:15.572434  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:12:15.602383  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:15.627969  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:15.650700  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:15.671263  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:15.692710  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:15.715510  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:15.740163  657716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:15.756242  657716 ssh_runner.go:195] Run: openssl version
	I1124 03:12:15.764455  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:15.774930  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779615  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779675  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.837760  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:12:15.848860  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:15.859402  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864242  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864304  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.923088  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:15.933908  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:15.944242  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949198  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949248  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:16.007273  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
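
The three hash-and-link sequences above follow OpenSSL's c_rehash convention: a CA in the system trust store is located through a symlink named after its subject hash, so `b5213941.0` above is simply `minikubeCA.pem` under its computed hash. A minimal sketch of the same convention (paths taken from the log):

	# compute the subject hash OpenSSL uses to look a CA up at verify time
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expose the cert as <hash>.0 so anything using the system trust store finds it
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
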
	I1124 03:12:16.018117  657716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:16.023108  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:16.086212  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:16.144287  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:16.203439  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:16.267980  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:16.329154  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
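
Each `-checkend 86400` run above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit marks the certificate as due for regeneration before the cluster is started. A standalone sketch (cert path from the log):

	# exit 0 if the cert is valid for at least another 24h, non-zero otherwise
	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
		echo "certificate still valid tomorrow"
	else
		echo "certificate expires within 24h - regenerate"
	fi
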
	I1124 03:12:16.391972  657716 kubeadm.go:401] StartCluster: {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:16.392083  657716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:16.392153  657716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:16.431895  657716 cri.go:89] found id: "3dc1c0625a30c329549c40b7e0c43d1ab4d818ccc0fb85ea668066af23a327d3"
	I1124 03:12:16.431924  657716 cri.go:89] found id: "4e8e84f339bed2139577a34b2e6715ab1a8d9a3b425a16886245d61781115cf0"
	I1124 03:12:16.431930  657716 cri.go:89] found id: "7294ac6825ca8afcd936803374aa461bcb4d637ea8743600784037c5acb225b2"
	I1124 03:12:16.431934  657716 cri.go:89] found id: "767a9908e75937581bb6f9fd527e760a401658cde3d3cf28bc2b66613eab1027"
	I1124 03:12:16.431938  657716 cri.go:89] found id: ""
	I1124 03:12:16.431989  657716 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:16.448469  657716 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:16Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:16.448636  657716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:16.460046  657716 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:16.460066  657716 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:16.460159  657716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:16.470578  657716 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:16.472039  657716 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-603010" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.472691  657716 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-603010" cluster setting kubeconfig missing "no-preload-603010" context setting]
	I1124 03:12:16.473827  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.476388  657716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:16.491280  657716 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 03:12:16.491307  657716 kubeadm.go:602] duration metric: took 31.234841ms to restartPrimaryControlPlane
	I1124 03:12:16.491317  657716 kubeadm.go:403] duration metric: took 99.357197ms to StartCluster
	I1124 03:12:16.491333  657716 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.491393  657716 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.492731  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.492990  657716 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:16.493291  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:16.493352  657716 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:16.493441  657716 addons.go:70] Setting storage-provisioner=true in profile "no-preload-603010"
	I1124 03:12:16.493465  657716 addons.go:239] Setting addon storage-provisioner=true in "no-preload-603010"
	W1124 03:12:16.493473  657716 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:16.493503  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494027  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.494266  657716 addons.go:70] Setting dashboard=true in profile "no-preload-603010"
	I1124 03:12:16.494322  657716 addons.go:239] Setting addon dashboard=true in "no-preload-603010"
	I1124 03:12:16.494338  657716 addons.go:70] Setting default-storageclass=true in profile "no-preload-603010"
	I1124 03:12:16.494434  657716 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603010"
	W1124 03:12:16.494361  657716 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:16.494570  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494863  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.495005  657716 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:16.495647  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.496468  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:16.527269  657716 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:16.528480  657716 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:16.528517  657716 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1124 03:12:14.168310  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:16.172923  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:18.176795  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:14.828319  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:14.828372  656542 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:14.828432  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.858092  656542 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:14.858118  656542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:14.858192  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.865650  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.866433  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.895242  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.975501  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:14.992389  656542 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:12:15.008151  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:15.016186  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:15.016211  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:15.031574  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:15.042522  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:15.042540  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:15.074331  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:15.074365  656542 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:15.109090  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:15.109113  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:15.128161  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:15.128184  656542 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:15.147874  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:15.147903  656542 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:15.168191  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:15.168211  656542 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:15.185637  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:15.185661  656542 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:15.202994  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:15.203016  656542 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:15.221608  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:17.996962  656542 node_ready.go:49] node "default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:17.997067  656542 node_ready.go:38] duration metric: took 3.004589581s for node "default-k8s-diff-port-993813" to be "Ready" ...
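
node_ready.go polls the node object until its Ready condition turns True. A roughly equivalent manual check with kubectl (node name from the log, timeout matching the 6m wait; a sketch, not what minikube itself runs):

	kubectl wait --for=condition=Ready node/default-k8s-diff-port-993813 --timeout=6m
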
	I1124 03:12:17.997096  656542 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:17.997184  656542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:18.834613  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.826385361s)
	I1124 03:12:18.834690  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.803092411s)
	I1124 03:12:18.834853  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.613213665s)
	I1124 03:12:18.834988  656542 api_server.go:72] duration metric: took 4.047778988s to wait for apiserver process to appear ...
	I1124 03:12:18.835771  656542 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:18.835800  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:18.838614  656542 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993813 addons enable metrics-server
	
	I1124 03:12:18.844882  656542 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 03:12:17.043130  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.043165  658811 ubuntu.go:182] provisioning hostname "embed-certs-284604"
	I1124 03:12:17.043247  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.069679  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.070109  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.070142  658811 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-284604 && echo "embed-certs-284604" | sudo tee /etc/hostname
	I1124 03:12:17.259114  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.259199  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.284082  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.284399  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.284433  658811 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-284604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-284604/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-284604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:17.452374  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:17.452411  658811 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:17.452438  658811 ubuntu.go:190] setting up certificates
	I1124 03:12:17.452452  658811 provision.go:84] configureAuth start
	I1124 03:12:17.452521  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:17.483434  658811 provision.go:143] copyHostCerts
	I1124 03:12:17.483502  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:17.483519  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:17.483580  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:17.483712  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:17.483725  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:17.483764  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:17.483851  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:17.483858  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:17.483909  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:17.483990  658811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-284604 san=[127.0.0.1 192.168.94.2 embed-certs-284604 localhost minikube]
	I1124 03:12:17.911206  658811 provision.go:177] copyRemoteCerts
	I1124 03:12:17.911335  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:17.911394  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.943914  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.069938  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:18.098447  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:18.124997  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:18.162531  658811 provision.go:87] duration metric: took 710.055135ms to configureAuth
	I1124 03:12:18.162560  658811 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:18.162764  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:18.162877  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.187248  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:18.187553  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:18.187575  658811 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:18.557227  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:18.557257  658811 machine.go:97] duration metric: took 4.723983027s to provisionDockerMachine
	I1124 03:12:18.557270  658811 client.go:176] duration metric: took 9.315155053s to LocalClient.Create
	I1124 03:12:18.557286  658811 start.go:167] duration metric: took 9.315214435s to libmachine.API.Create "embed-certs-284604"
	I1124 03:12:18.557298  658811 start.go:293] postStartSetup for "embed-certs-284604" (driver="docker")
	I1124 03:12:18.557310  658811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:18.557379  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:18.557432  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.587404  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.715877  658811 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:18.721275  658811 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:18.721309  658811 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:18.721322  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:18.721381  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:18.721473  658811 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:18.721597  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:18.732645  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:18.763370  658811 start.go:296] duration metric: took 206.056597ms for postStartSetup
	I1124 03:12:18.763732  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.791899  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:18.792183  658811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:18.792233  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.820806  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.936530  658811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:18.948570  658811 start.go:128] duration metric: took 9.708372989s to createHost
	I1124 03:12:18.948686  658811 start.go:83] releasing machines lock for "embed-certs-284604", held for 9.708587492s
	I1124 03:12:18.948771  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.973190  658811 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:18.973375  658811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:18.973512  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.973582  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.998620  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.999698  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.845938  656542 addons.go:530] duration metric: took 4.058450553s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 03:12:18.846295  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:18.846717  656542 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:12:19.335969  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:19.342155  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1124 03:12:19.343392  656542 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:19.343421  656542 api_server.go:131] duration metric: took 507.639836ms to wait for apiserver health ...
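
The 500 above is expected in the first seconds after a restart: the apiserver answers /healthz before its rbac/bootstrap-roles post-start hook has finished, so minikube simply re-polls until it gets a 200 with body "ok". A curl sketch of the same loop (endpoint from the log; `-k` skips TLS verification purely for brevity, whereas minikube's client validates against the cluster CA):

	# poll until the apiserver reports healthy
	until curl -ks --max-time 2 https://192.168.76.2:8444/healthz | grep -qx ok; do
		sleep 0.5
	done
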
	I1124 03:12:19.343433  656542 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:19.347170  656542 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:19.347220  656542 system_pods.go:61] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.347233  656542 system_pods.go:61] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.347244  656542 system_pods.go:61] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.347253  656542 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.347263  656542 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.347271  656542 system_pods.go:61] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.347279  656542 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.347290  656542 system_pods.go:61] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.347300  656542 system_pods.go:74] duration metric: took 3.857291ms to wait for pod list to return data ...
	I1124 03:12:19.347309  656542 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:19.350005  656542 default_sa.go:45] found service account: "default"
	I1124 03:12:19.350027  656542 default_sa.go:55] duration metric: took 2.709767ms for default service account to be created ...
	I1124 03:12:19.350036  656542 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:19.354450  656542 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:19.354480  656542 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.354492  656542 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.354502  656542 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.354512  656542 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.354525  656542 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.354534  656542 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.354542  656542 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.354550  656542 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.354560  656542 system_pods.go:126] duration metric: took 4.516416ms to wait for k8s-apps to be running ...
	I1124 03:12:19.354569  656542 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:19.354617  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:19.377699  656542 system_svc.go:56] duration metric: took 23.119925ms WaitForService to wait for kubelet
	I1124 03:12:19.377726  656542 kubeadm.go:587] duration metric: took 4.590516557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:19.377808  656542 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:19.381785  656542 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:19.381815  656542 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:19.381831  656542 node_conditions.go:105] duration metric: took 4.017737ms to run NodePressure ...
	I1124 03:12:19.381846  656542 start.go:242] waiting for startup goroutines ...
	I1124 03:12:19.381857  656542 start.go:247] waiting for cluster config update ...
	I1124 03:12:19.381883  656542 start.go:256] writing updated cluster config ...
	I1124 03:12:19.382229  656542 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:19.387932  656542 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:19.394333  656542 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:16.529636  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:16.529826  657716 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:16.529877  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.529719  657716 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.530024  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:16.530070  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.534729  657716 addons.go:239] Setting addon default-storageclass=true in "no-preload-603010"
	W1124 03:12:16.534754  657716 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:16.534783  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.539339  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.565768  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.582397  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.585042  657716 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.585070  657716 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:16.585126  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.617946  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.706410  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:16.731745  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:16.731773  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:16.736337  657716 node_ready.go:35] waiting up to 6m0s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:16.736937  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.758823  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:16.758847  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:16.768684  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.788344  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:16.788369  657716 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:16.806593  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:16.806620  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:16.847576  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:16.847609  657716 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:16.867721  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:16.867755  657716 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:16.886765  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:16.886787  657716 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:16.907569  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:16.907732  657716 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:16.929396  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:16.929417  657716 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:16.958374  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:19.957067  657716 node_ready.go:49] node "no-preload-603010" is "Ready"
	I1124 03:12:19.957111  657716 node_ready.go:38] duration metric: took 3.220732108s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:19.957131  657716 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:19.957256  657716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:20.880814  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.143842388s)
	I1124 03:12:20.881241  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.112181993s)
	I1124 03:12:21.157660  657716 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.200376454s)
	I1124 03:12:21.157703  657716 api_server.go:72] duration metric: took 4.664681444s to wait for apiserver process to appear ...
	I1124 03:12:21.157713  657716 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:21.157733  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.158403  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199980339s)
	I1124 03:12:21.160177  657716 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-603010 addons enable metrics-server
	
	I1124 03:12:21.161363  657716 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 03:12:19.120481  658811 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:19.211741  658811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:19.277394  658811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:19.284078  658811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:19.284149  658811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:19.319995  658811 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:12:19.320028  658811 start.go:496] detecting cgroup driver to use...
	I1124 03:12:19.320064  658811 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:19.320117  658811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:19.345823  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:19.367716  658811 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:19.367782  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:19.389799  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:19.412438  658811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:19.524730  658811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:19.637210  658811 docker.go:234] disabling docker service ...
	I1124 03:12:19.637286  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:19.659861  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:19.677152  658811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:19.823448  658811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:19.960707  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:19.981616  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:20.012418  658811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:20.012486  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.058077  658811 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:20.058214  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.074742  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.118587  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.135044  658811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:20.151861  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.172656  658811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.194765  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.232792  658811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:20.242855  658811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:20.253417  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:20.371692  658811 ssh_runner.go:195] Run: sudo systemctl restart crio
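
The sed sequence above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, reset conmon_cgroup to "pod", and inject net.ipv4.ip_unprivileged_port_start=0 into default_sysctls so containers may bind low ports; the drop-in then takes effect on the crio restart. The same edits condensed into one script (file path and keys exactly as in the log; only the grouping is new, and the log additionally deletes any pre-existing ip_unprivileged_port_start entry first):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo grep -q '^ *default_sysctls' "$CONF" \
		|| sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
	sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio
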
	I1124 03:12:21.221343  658811 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:21.221440  658811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:21.226905  658811 start.go:564] Will wait 60s for crictl version
	I1124 03:12:21.227016  658811 ssh_runner.go:195] Run: which crictl
	I1124 03:12:21.231693  658811 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:21.262514  658811 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:21.262603  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.302192  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.363037  658811 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:21.162777  657716 addons.go:530] duration metric: took 4.669427095s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 03:12:21.163688  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:21.163718  657716 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
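
The 500 above is transient: "[-]poststarthook/rbac/bootstrap-roles failed" only means the bootstrap-RBAC hook has not finished yet, so minikube keeps polling until /healthz returns 200 (as it does at 03:12:21.664 further down). A minimal sketch of that kind of poll, assuming a self-signed apiserver certificate (hence InsecureSkipVerify); this is illustrative, not minikube's actual api_server.go implementation:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz GETs the apiserver /healthz endpoint until it returns 200
// or the deadline passes. The body of a non-200 response lists which
// poststarthooks are still pending, exactly as in the log above.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The cluster CA is not in the system trust store here.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}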
	W1124 03:12:20.668896  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:23.167980  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:21.364543  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:21.388019  658811 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:21.393290  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
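
The shell one-liner above rewrites /etc/hosts by filtering out any stale host.minikube.internal line and appending the current gateway IP, staging the result in /tmp/h.$$ before copying it back with sudo. A rough Go equivalent of the same filter-and-append pattern (path, IP, and hostname taken from the log; the temp-file staging step is omitted for brevity):

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHostsEntry drops any existing line ending in "\thost" and appends
// a fresh "ip\thost" mapping, mirroring the grep -v / echo pipeline above.
func updateHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry: drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := updateHostsEntry("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}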
	I1124 03:12:21.406629  658811 kubeadm.go:884] updating cluster {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:21.406778  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:21.406846  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.445258  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.445284  658811 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:21.445336  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.471000  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.471025  658811 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:21.471037  658811 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:21.471125  658811 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-284604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:21.471186  658811 ssh_runner.go:195] Run: crio config
	I1124 03:12:21.516457  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:21.516480  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:21.516502  658811 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:21.516532  658811 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-284604 NodeName:embed-certs-284604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:21.516680  658811 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-284604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:21.516751  658811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:21.524967  658811 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:21.525035  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:21.533487  658811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 03:12:21.547228  658811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:21.640415  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1124 03:12:21.656434  658811 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:21.660696  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:21.674410  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:21.772584  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:21.798340  658811 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604 for IP: 192.168.94.2
	I1124 03:12:21.798360  658811 certs.go:195] generating shared ca certs ...
	I1124 03:12:21.798381  658811 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.798539  658811 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:21.798593  658811 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:21.798607  658811 certs.go:257] generating profile certs ...
	I1124 03:12:21.798690  658811 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key
	I1124 03:12:21.798708  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt with IP's: []
	I1124 03:12:21.837756  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt ...
	I1124 03:12:21.837790  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt: {Name:mk6d8aec213556beda470e3e5188eed1aec5e183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838000  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key ...
	I1124 03:12:21.838030  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key: {Name:mk56f44e1d331f82a560e15fe6a3c3ca4602bba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838172  658811 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087
	I1124 03:12:21.838189  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 03:12:21.915471  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 ...
	I1124 03:12:21.915494  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087: {Name:mk185605a13bb00cdff0decbde0063003287a88f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915630  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 ...
	I1124 03:12:21.915643  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087: {Name:mk1404f69a73d575873220c9d20779709c9db66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915715  658811 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt
	I1124 03:12:21.915784  658811 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key
	I1124 03:12:21.915837  658811 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key
	I1124 03:12:21.915852  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt with IP's: []
	I1124 03:12:22.064876  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt ...
	I1124 03:12:22.064923  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt: {Name:mk7bbfb718db4eee243d6b6658f5b6db725b34b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:22.065108  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key ...
	I1124 03:12:22.065140  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key: {Name:mk282c31a6bdbd1f185d5fa986bb6679f789f94a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
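
The apiserver certificate generated above is issued for the service VIP (10.96.0.1), loopback, and the node IP, so clients can reach the apiserver under any of those addresses. A self-contained sketch of issuing a certificate with IP SANs via crypto/x509 (self-signed here for brevity; minikube actually signs profile certs with its shared minikubeCA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs from the apiserver cert in the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.94.2"),
		},
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}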
	I1124 03:12:22.065488  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:22.065564  658811 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:22.065576  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:22.065602  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:22.065630  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:22.065654  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:22.065702  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:22.066383  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:22.086471  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:22.103602  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:22.120085  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:22.137488  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:12:22.154084  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:22.171055  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:22.187877  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:22.204407  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:22.222560  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:22.241380  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:22.258066  658811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:22.269950  658811 ssh_runner.go:195] Run: openssl version
	I1124 03:12:22.276120  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:22.283870  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287375  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287414  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.321400  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:22.329479  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:22.338113  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342815  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342865  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.384524  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:22.393408  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:22.402946  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.406951  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.407009  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.445501  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
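
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory CA lookup: each trusted PEM must be reachable under /etc/ssl/certs as <subject-hash>.0 (51391683.0, 3ec20f2e.0, and b5213941.0 in this run). A sketch that derives the link name the same way, shelling out to openssl just as the commands in the log do:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkForOpenSSL creates the /etc/ssl/certs/<subject-hash>.0 symlink
// that OpenSSL's hashed-directory lookup expects for certPath.
func linkForOpenSSL(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // equivalent of ln -fs: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkForOpenSSL("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}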
	I1124 03:12:22.454521  658811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:22.458152  658811 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:12:22.458212  658811 kubeadm.go:401] StartCluster: {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:22.458278  658811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:22.458330  658811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:22.487574  658811 cri.go:89] found id: ""
	I1124 03:12:22.487653  658811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:22.495876  658811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:12:22.505058  658811 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:12:22.505121  658811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:12:22.515162  658811 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:12:22.515181  658811 kubeadm.go:158] found existing configuration files:
	
	I1124 03:12:22.515229  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:12:22.525864  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:12:22.525956  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:12:22.535632  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:12:22.545975  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:12:22.546068  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:12:22.556144  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.566062  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:12:22.566123  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.576364  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:12:22.587041  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:12:22.587089  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:12:22.596656  658811 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:12:22.678370  658811 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:12:22.762592  658811 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1124 03:12:21.400229  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:23.400859  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:21.658606  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.664294  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:12:21.665654  657716 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:21.665685  657716 api_server.go:131] duration metric: took 507.965368ms to wait for apiserver health ...
	I1124 03:12:21.665696  657716 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:21.669523  657716 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:21.669569  657716 system_pods.go:61] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.669584  657716 system_pods.go:61] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.669600  657716 system_pods.go:61] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.669613  657716 system_pods.go:61] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.669620  657716 system_pods.go:61] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.669631  657716 system_pods.go:61] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.669640  657716 system_pods.go:61] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.669651  657716 system_pods.go:61] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.669661  657716 system_pods.go:74] duration metric: took 3.958242ms to wait for pod list to return data ...
	I1124 03:12:21.669744  657716 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:21.672641  657716 default_sa.go:45] found service account: "default"
	I1124 03:12:21.672665  657716 default_sa.go:55] duration metric: took 2.912794ms for default service account to be created ...
	I1124 03:12:21.672674  657716 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:21.676337  657716 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:21.676367  657716 system_pods.go:89] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.676379  657716 system_pods.go:89] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.676394  657716 system_pods.go:89] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.676403  657716 system_pods.go:89] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.676411  657716 system_pods.go:89] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.676422  657716 system_pods.go:89] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.676433  657716 system_pods.go:89] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.676441  657716 system_pods.go:89] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.676450  657716 system_pods.go:126] duration metric: took 3.770261ms to wait for k8s-apps to be running ...
	I1124 03:12:21.676459  657716 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:21.676504  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:21.690659  657716 system_svc.go:56] duration metric: took 14.192089ms WaitForService to wait for kubelet
	I1124 03:12:21.690686  657716 kubeadm.go:587] duration metric: took 5.197662584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:21.690707  657716 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:21.693136  657716 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:21.693164  657716 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:21.693184  657716 node_conditions.go:105] duration metric: took 2.469957ms to run NodePressure ...
	I1124 03:12:21.693203  657716 start.go:242] waiting for startup goroutines ...
	I1124 03:12:21.693215  657716 start.go:247] waiting for cluster config update ...
	I1124 03:12:21.693239  657716 start.go:256] writing updated cluster config ...
	I1124 03:12:21.693532  657716 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:21.697901  657716 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:21.701025  657716 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:12:23.706826  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.707596  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.168947  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:27.669069  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:25.402048  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.901054  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.707794  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.710379  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.675678  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:32.166267  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:34.784594  658811 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:12:34.784648  658811 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:12:34.784736  658811 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:12:34.784810  658811 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:12:34.784870  658811 kubeadm.go:319] OS: Linux
	I1124 03:12:34.784983  658811 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:12:34.785059  658811 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:12:34.785107  658811 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:12:34.785166  658811 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:12:34.785237  658811 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:12:34.785303  658811 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:12:34.785372  658811 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:12:34.785441  658811 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:12:34.785518  658811 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:12:34.785647  658811 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:12:34.785738  658811 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:12:34.785806  658811 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:12:34.786978  658811 out.go:252]   - Generating certificates and keys ...
	I1124 03:12:34.787057  658811 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:12:34.787166  658811 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:12:34.787260  658811 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:12:34.787314  658811 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:12:34.787380  658811 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:12:34.787463  658811 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:12:34.787510  658811 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:12:34.787654  658811 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787713  658811 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:12:34.787835  658811 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787929  658811 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:12:34.787996  658811 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:12:34.788075  658811 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:12:34.788161  658811 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:12:34.788246  658811 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:12:34.788307  658811 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:12:34.788377  658811 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:12:34.788464  658811 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:12:34.788510  658811 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:12:34.788574  658811 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:12:34.788677  658811 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:12:34.789842  658811 out.go:252]   - Booting up control plane ...
	I1124 03:12:34.789955  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:12:34.790029  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:12:34.790102  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:12:34.790202  658811 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:12:34.790286  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:12:34.790369  658811 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:12:34.790438  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:12:34.790470  658811 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:12:34.790573  658811 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:12:34.790662  658811 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:12:34.790715  658811 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001939634s
	I1124 03:12:34.790808  658811 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:12:34.790874  658811 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 03:12:34.790987  658811 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:12:34.791057  658811 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:12:34.791109  658811 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.83516238s
	I1124 03:12:34.791172  658811 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.120221493s
	I1124 03:12:34.791231  658811 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501624476s
	I1124 03:12:34.791319  658811 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:12:34.791443  658811 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:12:34.791516  658811 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:12:34.791778  658811 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-284604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:12:34.791865  658811 kubeadm.go:319] [bootstrap-token] Using token: 6opk0j.95uwfc60sd8szhpc
	I1124 03:12:34.793026  658811 out.go:252]   - Configuring RBAC rules ...
	I1124 03:12:34.793125  658811 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:12:34.793213  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:12:34.793344  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:12:34.793455  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:12:34.793557  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:12:34.793642  658811 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:12:34.793774  658811 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:12:34.793810  658811 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:12:34.793851  658811 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:12:34.793857  658811 kubeadm.go:319] 
	I1124 03:12:34.793964  658811 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:12:34.793973  658811 kubeadm.go:319] 
	I1124 03:12:34.794046  658811 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:12:34.794053  658811 kubeadm.go:319] 
	I1124 03:12:34.794074  658811 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:12:34.794151  658811 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:12:34.794229  658811 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:12:34.794239  658811 kubeadm.go:319] 
	I1124 03:12:34.794318  658811 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:12:34.794327  658811 kubeadm.go:319] 
	I1124 03:12:34.794375  658811 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:12:34.794381  658811 kubeadm.go:319] 
	I1124 03:12:34.794424  658811 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:12:34.794490  658811 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:12:34.794554  658811 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:12:34.794560  658811 kubeadm.go:319] 
	I1124 03:12:34.794633  658811 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:12:34.794705  658811 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:12:34.794712  658811 kubeadm.go:319] 
	I1124 03:12:34.794781  658811 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.794955  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:12:34.794990  658811 kubeadm.go:319] 	--control-plane 
	I1124 03:12:34.794996  658811 kubeadm.go:319] 
	I1124 03:12:34.795133  658811 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:12:34.795142  658811 kubeadm.go:319] 
	I1124 03:12:34.795208  658811 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.795304  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
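
The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA's DER-encoded Subject Public Key Info, which lets joining nodes pin the CA without trusting the network. A sketch that recomputes the same value from the CA PEM (path taken from the certs steps earlier in the log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found in CA file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm pins sha256 over the SubjectPublicKeyInfo, not the whole cert.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}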
	I1124 03:12:34.795316  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:34.795322  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:34.796503  658811 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1124 03:12:29.901574  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.399665  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.206353  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.206828  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.667383  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:35.167626  650744 pod_ready.go:94] pod "coredns-5dd5756b68-5nwx9" is "Ready"
	I1124 03:12:35.167652  650744 pod_ready.go:86] duration metric: took 36.006547637s for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.170471  650744 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.174915  650744 pod_ready.go:94] pod "etcd-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.174952  650744 pod_ready.go:86] duration metric: took 4.460425ms for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.178276  650744 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.181797  650744 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.181815  650744 pod_ready.go:86] duration metric: took 3.521385ms for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.184086  650744 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.364640  650744 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.364666  650744 pod_ready.go:86] duration metric: took 180.561055ms for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.566321  650744 pod_ready.go:83] waiting for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.965760  650744 pod_ready.go:94] pod "kube-proxy-r82jh" is "Ready"
	I1124 03:12:35.965786  650744 pod_ready.go:86] duration metric: took 399.441601ms for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.166112  650744 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564858  650744 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-579951" is "Ready"
	I1124 03:12:36.564911  650744 pod_ready.go:86] duration metric: took 398.774389ms for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564927  650744 pod_ready.go:40] duration metric: took 37.40842222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:36.606666  650744 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 03:12:36.609650  650744 out.go:203] 
	W1124 03:12:36.610839  650744 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:12:36.611943  650744 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:12:36.613009  650744 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-579951" cluster and "default" namespace by default
	I1124 03:12:34.797545  658811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:12:34.801904  658811 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:12:34.801919  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:12:34.815659  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:12:35.008985  658811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:12:35.009118  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-284604 minikube.k8s.io/updated_at=2025_11_24T03_12_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=embed-certs-284604 minikube.k8s.io/primary=true
	I1124 03:12:35.009137  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.019423  658811 ops.go:34] apiserver oom_adj: -16
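
The -16 read back above comes from /proc/<pid>/oom_adj for the kube-apiserver process; a negative value makes the kernel's OOM killer less likely to pick it. A sketch of the same check, with the log's pgrep replaced by a simple scan of /proc comm files (an assumption for illustration, not minikube's code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	procs, _ := filepath.Glob("/proc/[0-9]*/comm")
	for _, comm := range procs {
		name, err := os.ReadFile(comm)
		if err != nil || strings.TrimSpace(string(name)) != "kube-apiserver" {
			continue
		}
		pid := filepath.Base(filepath.Dir(comm))
		adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err == nil {
			fmt.Printf("apiserver oom_adj: %s", adj)
		}
	}
}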
	I1124 03:12:35.098937  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.600025  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.099882  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.599914  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.099714  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.599861  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.098989  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.599248  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.099379  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.599598  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.664570  658811 kubeadm.go:1114] duration metric: took 4.655535544s to wait for elevateKubeSystemPrivileges
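
The repeated `kubectl get sa default` runs above are a fixed-interval retry: kubeadm creates the default ServiceAccount asynchronously, so minikube loops roughly every 500ms until the command exits 0 before granting cluster-admin to kube-system:default. A sketch of that loop under the same assumptions (kubectl on PATH; kubeconfig path from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA retries `kubectl get sa default` every 500ms until
// it succeeds or the timeout elapses.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}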
	I1124 03:12:39.664621  658811 kubeadm.go:403] duration metric: took 17.206413974s to StartCluster
	I1124 03:12:39.664642  658811 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.664720  658811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:39.666858  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.667137  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:12:39.667148  658811 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:39.667230  658811 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:39.667331  658811 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-284604"
	I1124 03:12:39.667356  658811 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-284604"
	I1124 03:12:39.667360  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:39.667396  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.667427  658811 addons.go:70] Setting default-storageclass=true in profile "embed-certs-284604"
	I1124 03:12:39.667451  658811 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-284604"
	I1124 03:12:39.667810  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.667990  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.668614  658811 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:39.670239  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:39.693324  658811 addons.go:239] Setting addon default-storageclass=true in "embed-certs-284604"
	I1124 03:12:39.693377  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.693617  658811 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 03:12:34.900232  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:36.901987  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:39.399311  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:39.693843  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.695301  658811 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.695324  658811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:39.695401  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.723273  658811 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.723298  658811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:39.723378  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.730678  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.746663  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.790082  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:12:39.807223  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:39.854663  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.859938  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.988561  658811 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 03:12:39.990213  658811 node_ready.go:35] waiting up to 6m0s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:12:40.170444  658811 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1124 03:12:36.707151  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:39.206261  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:41.206507  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	I1124 03:12:40.171595  658811 addons.go:530] duration metric: took 504.363947ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:12:40.492653  658811 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-284604" context rescaled to 1 replicas
	W1124 03:12:41.992667  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:43.993353  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:41.399566  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.899302  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.705614  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.706618  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.993493  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:47.993708  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:46.399440  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.399607  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.205812  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:50.206724  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:50.493353  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	I1124 03:12:50.993323  658811 node_ready.go:49] node "embed-certs-284604" is "Ready"
	I1124 03:12:50.993350  658811 node_ready.go:38] duration metric: took 11.003110454s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:12:50.993367  658811 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:50.993411  658811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:51.005273  658811 api_server.go:72] duration metric: took 11.338089025s to wait for apiserver process to appear ...
	I1124 03:12:51.005299  658811 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:51.005319  658811 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:12:51.010460  658811 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:12:51.011346  658811 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:51.011367  658811 api_server.go:131] duration metric: took 6.06186ms to wait for apiserver health ...
	I1124 03:12:51.011376  658811 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:51.014056  658811 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:51.014084  658811 system_pods.go:61] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.014092  658811 system_pods.go:61] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.014101  658811 system_pods.go:61] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.014106  658811 system_pods.go:61] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.014113  658811 system_pods.go:61] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.014119  658811 system_pods.go:61] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.014136  658811 system_pods.go:61] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.014147  658811 system_pods.go:61] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.014155  658811 system_pods.go:74] duration metric: took 2.773001ms to wait for pod list to return data ...
	I1124 03:12:51.014164  658811 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:51.016349  658811 default_sa.go:45] found service account: "default"
	I1124 03:12:51.016366  658811 default_sa.go:55] duration metric: took 2.196577ms for default service account to be created ...
	I1124 03:12:51.016373  658811 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:51.018741  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.018763  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.018768  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.018774  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.018778  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.018783  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.018787  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.018791  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.018798  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.018817  658811 retry.go:31] will retry after 267.963041ms: missing components: kube-dns
	I1124 03:12:51.291183  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.291223  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.291231  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.291239  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.291244  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.291250  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.291255  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.291260  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.291268  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.291295  658811 retry.go:31] will retry after 316.287047ms: missing components: kube-dns
	I1124 03:12:51.610985  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.611019  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.611026  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.611037  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.611045  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.611055  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.611061  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.611066  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.611074  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.611098  658811 retry.go:31] will retry after 440.03042ms: missing components: kube-dns
	I1124 03:12:52.054793  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:52.054821  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:52.054826  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:52.054831  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:52.054835  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:52.054839  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:52.054842  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:52.054845  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:52.054850  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:52.054863  658811 retry.go:31] will retry after 498.386661ms: missing components: kube-dns
	I1124 03:12:52.557040  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:52.557071  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Running
	I1124 03:12:52.557079  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:52.557084  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:52.557089  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:52.557095  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:52.557100  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:52.557104  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:52.557110  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Running
	I1124 03:12:52.557120  658811 system_pods.go:126] duration metric: took 1.540739928s to wait for k8s-apps to be running ...
	I1124 03:12:52.557134  658811 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:52.557188  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:52.570482  658811 system_svc.go:56] duration metric: took 13.341226ms WaitForService to wait for kubelet
	I1124 03:12:52.570511  658811 kubeadm.go:587] duration metric: took 12.903331916s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:52.570535  658811 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:52.573089  658811 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:52.573117  658811 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:52.573148  658811 node_conditions.go:105] duration metric: took 2.605161ms to run NodePressure ...
	I1124 03:12:52.573166  658811 start.go:242] waiting for startup goroutines ...
	I1124 03:12:52.573175  658811 start.go:247] waiting for cluster config update ...
	I1124 03:12:52.573187  658811 start.go:256] writing updated cluster config ...
	I1124 03:12:52.573408  658811 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:52.576899  658811 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:52.580189  658811 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.584242  658811 pod_ready.go:94] pod "coredns-66bc5c9577-89mzc" is "Ready"
	I1124 03:12:52.584262  658811 pod_ready.go:86] duration metric: took 4.045428ms for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.586066  658811 pod_ready.go:83] waiting for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.590045  658811 pod_ready.go:94] pod "etcd-embed-certs-284604" is "Ready"
	I1124 03:12:52.590064  658811 pod_ready.go:86] duration metric: took 3.981268ms for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.592126  658811 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.595532  658811 pod_ready.go:94] pod "kube-apiserver-embed-certs-284604" is "Ready"
	I1124 03:12:52.595555  658811 pod_ready.go:86] duration metric: took 3.408619ms for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.597386  658811 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.980512  658811 pod_ready.go:94] pod "kube-controller-manager-embed-certs-284604" is "Ready"
	I1124 03:12:52.980538  658811 pod_ready.go:86] duration metric: took 383.129867ms for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.181479  658811 pod_ready.go:83] waiting for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.581552  658811 pod_ready.go:94] pod "kube-proxy-bn8fd" is "Ready"
	I1124 03:12:53.581575  658811 pod_ready.go:86] duration metric: took 400.07394ms for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.781409  658811 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.181669  658811 pod_ready.go:94] pod "kube-scheduler-embed-certs-284604" is "Ready"
	I1124 03:12:54.181696  658811 pod_ready.go:86] duration metric: took 400.263506ms for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.181712  658811 pod_ready.go:40] duration metric: took 1.604781402s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:54.228480  658811 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:54.231260  658811 out.go:179] * Done! kubectl is now configured to use "embed-certs-284604" cluster and "default" namespace by default
	W1124 03:12:50.399926  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:52.400576  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:52.900171  656542 pod_ready.go:94] pod "coredns-66bc5c9577-w62hm" is "Ready"
	I1124 03:12:52.900193  656542 pod_ready.go:86] duration metric: took 33.505834176s for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.903110  656542 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.907513  656542 pod_ready.go:94] pod "etcd-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:52.907539  656542 pod_ready.go:86] duration metric: took 4.401311ms for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.909400  656542 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.913156  656542 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:52.913178  656542 pod_ready.go:86] duration metric: took 3.755745ms for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.914951  656542 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.098380  656542 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:53.098409  656542 pod_ready.go:86] duration metric: took 183.435612ms for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.298588  656542 pod_ready.go:83] waiting for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.698811  656542 pod_ready.go:94] pod "kube-proxy-xgjzs" is "Ready"
	I1124 03:12:53.698835  656542 pod_ready.go:86] duration metric: took 400.225655ms for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.898023  656542 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.299083  656542 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:54.299107  656542 pod_ready.go:86] duration metric: took 401.0576ms for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.299119  656542 pod_ready.go:40] duration metric: took 34.911155437s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:54.345901  656542 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:54.347541  656542 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-993813" cluster and "default" namespace by default
	W1124 03:12:52.208247  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:54.707505  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	I1124 03:12:56.206822  657716 pod_ready.go:94] pod "coredns-66bc5c9577-9n5xf" is "Ready"
	I1124 03:12:56.206857  657716 pod_ready.go:86] duration metric: took 34.50580389s for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.209449  657716 pod_ready.go:83] waiting for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.213288  657716 pod_ready.go:94] pod "etcd-no-preload-603010" is "Ready"
	I1124 03:12:56.213310  657716 pod_ready.go:86] duration metric: took 3.839555ms for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.215450  657716 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.219181  657716 pod_ready.go:94] pod "kube-apiserver-no-preload-603010" is "Ready"
	I1124 03:12:56.219201  657716 pod_ready.go:86] duration metric: took 3.726981ms for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.221198  657716 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.404873  657716 pod_ready.go:94] pod "kube-controller-manager-no-preload-603010" is "Ready"
	I1124 03:12:56.404930  657716 pod_ready.go:86] duration metric: took 183.709106ms for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.605567  657716 pod_ready.go:83] waiting for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.005571  657716 pod_ready.go:94] pod "kube-proxy-swj6c" is "Ready"
	I1124 03:12:57.005598  657716 pod_ready.go:86] duration metric: took 400.0046ms for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.205842  657716 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.605312  657716 pod_ready.go:94] pod "kube-scheduler-no-preload-603010" is "Ready"
	I1124 03:12:57.605336  657716 pod_ready.go:86] duration metric: took 399.465818ms for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.605349  657716 pod_ready.go:40] duration metric: took 35.907419342s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:57.646839  657716 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:57.648681  657716 out.go:179] * Done! kubectl is now configured to use "no-preload-603010" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 03:12:39 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:39.937644213Z" level=info msg="Started container" PID=1780 containerID=2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc/dashboard-metrics-scraper id=58813a65-06ca-4c3d-ada5-22ffc0e9f19c name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c11264046058ec32796ed66d5a5f539aa2c70db3f84a08174acffea0d9ae4ae
	Nov 24 03:12:40 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:40.01197076Z" level=info msg="Removing container: f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050" id=27c0c254-13a3-40b8-bbe8-7bb9ced82646 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:12:40 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:40.023547124Z" level=info msg="Removed container f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc/dashboard-metrics-scraper" id=27c0c254-13a3-40b8-bbe8-7bb9ced82646 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.037246376Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=628bc465-ea33-494f-a52a-4e846d0d73fd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.03817902Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a7bfc460-020a-40a8-b37a-741687db26c4 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.039227143Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7a11f683-9dc8-49a1-a4ff-389cf3b430b3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.039360672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.043620789Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.04384565Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7201f8caf35f2c261be5886fec7bf6746c4d8a96af3105a8274cfe986814166f/merged/etc/passwd: no such file or directory"
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.043882019Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7201f8caf35f2c261be5886fec7bf6746c4d8a96af3105a8274cfe986814166f/merged/etc/group: no such file or directory"
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.044506404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.075554406Z" level=info msg="Created container 573f6a7cb37360f3f1fa23cf31377d96a60d7bd6c0e83b06a385f1fcb540e103: kube-system/storage-provisioner/storage-provisioner" id=7a11f683-9dc8-49a1-a4ff-389cf3b430b3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.076026572Z" level=info msg="Starting container: 573f6a7cb37360f3f1fa23cf31377d96a60d7bd6c0e83b06a385f1fcb540e103" id=51d1b019-fc82-4ec2-8c89-6c668aeb933f name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.077871675Z" level=info msg="Started container" PID=1794 containerID=573f6a7cb37360f3f1fa23cf31377d96a60d7bd6c0e83b06a385f1fcb540e103 description=kube-system/storage-provisioner/storage-provisioner id=51d1b019-fc82-4ec2-8c89-6c668aeb933f name=/runtime.v1.RuntimeService/StartContainer sandboxID=686fe9ea8a0761a38c8280fefebba5eaf19b0ef59f2c9e330f025c70af33cab3
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.885038451Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0bec34c6-976f-4f98-883d-769ded261286 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.885935881Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c55682c5-c5a3-4ffc-8793-6c5c47fa3042 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.886909446Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc/dashboard-metrics-scraper" id=5eea27b3-b132-4fb1-bee0-c8818ae41919 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.887064638Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.892203692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.892657957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.925623908Z" level=info msg="Created container ca56ee1046dfd67567192ecb6131590bdde60902b351bebce38f90604b67c2ba: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc/dashboard-metrics-scraper" id=5eea27b3-b132-4fb1-bee0-c8818ae41919 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.926135394Z" level=info msg="Starting container: ca56ee1046dfd67567192ecb6131590bdde60902b351bebce38f90604b67c2ba" id=76cb4486-a5da-46d1-af56-7aa40bccbfc4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.928180312Z" level=info msg="Started container" PID=1829 containerID=ca56ee1046dfd67567192ecb6131590bdde60902b351bebce38f90604b67c2ba description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc/dashboard-metrics-scraper id=76cb4486-a5da-46d1-af56-7aa40bccbfc4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c11264046058ec32796ed66d5a5f539aa2c70db3f84a08174acffea0d9ae4ae
	Nov 24 03:13:01 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:01.068960759Z" level=info msg="Removing container: 2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff" id=45e2ef1f-0851-4e43-b26a-1d66b2ab2f43 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:13:01 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:01.077826021Z" level=info msg="Removed container 2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc/dashboard-metrics-scraper" id=45e2ef1f-0851-4e43-b26a-1d66b2ab2f43 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	ca56ee1046dfd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           8 seconds ago       Exited              dashboard-metrics-scraper   3                   6c11264046058       dashboard-metrics-scraper-6ffb444bf9-z8ltc             kubernetes-dashboard
	573f6a7cb3736       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   686fe9ea8a076       storage-provisioner                                    kube-system
	93cf9607a612f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   2a3fe2164e017       kubernetes-dashboard-855c9754f9-6tmlg                  kubernetes-dashboard
	578ada64e7018       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   2def9d1d1de0a       busybox                                                default
	4215d37d945b0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   62500196156fb       coredns-66bc5c9577-w62hm                               kube-system
	1bd7fbd7ac730       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   686fe9ea8a076       storage-provisioner                                    kube-system
	e9aedcc7b2f45       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   0270d3fa8beb2       kindnet-w6sh6                                          kube-system
	98b77ba6e3b6b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   f52f3cc9ad4ab       kube-proxy-xgjzs                                       kube-system
	9d08a55f25f2d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   aaebdbc47c617       kube-apiserver-default-k8s-diff-port-993813            kube-system
	a7d5f73dd018d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   9579ed6acdd5e       kube-scheduler-default-k8s-diff-port-993813            kube-system
	dd990c6cdcef7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   1e9189d8cc74c       kube-controller-manager-default-k8s-diff-port-993813   kube-system
	11357ba44da74       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   9f5bd76a8d024       etcd-default-k8s-diff-port-993813                      kube-system
	
	
	==> coredns [4215d37d945b02ffa680f6a88a284357077e2085850453212142af5a50e8e540] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38348 - 45319 "HINFO IN 5865592854072147901.8469372331643766163. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.484958805s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-993813
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-993813
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=default-k8s-diff-port-993813
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_11_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:11:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-993813
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:12:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:12:58 +0000   Mon, 24 Nov 2025 03:11:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:12:58 +0000   Mon, 24 Nov 2025 03:11:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:12:58 +0000   Mon, 24 Nov 2025 03:11:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:12:58 +0000   Mon, 24 Nov 2025 03:11:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-993813
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                704691fb-a437-4d94-adeb-2d360c12ce3d
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-w62hm                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-default-k8s-diff-port-993813                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-w6sh6                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-default-k8s-diff-port-993813             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-993813    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-xgjzs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-default-k8s-diff-port-993813             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-z8ltc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6tmlg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 118s)  kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 118s)  kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 118s)  kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           108s                 node-controller  Node default-k8s-diff-port-993813 event: Registered Node default-k8s-diff-port-993813 in Controller
	  Normal  NodeReady                95s                  kubelet          Node default-k8s-diff-port-993813 status is now: NodeReady
	  Normal  Starting                 56s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  56s (x8 over 56s)    kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    56s (x8 over 56s)    kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     56s (x8 over 56s)    kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node default-k8s-diff-port-993813 event: Registered Node default-k8s-diff-port-993813 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e] <==
	{"level":"warn","ts":"2025-11-24T03:12:17.098252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.110415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.117608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.126642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.136871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.148433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.160843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.170419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.180576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.188358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.212288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.218342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.236552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.245316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.260838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.325775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:20.503315Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.167522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T03:12:20.503556Z","caller":"traceutil/trace.go:172","msg":"trace[666866352] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:516; }","duration":"115.42581ms","start":"2025-11-24T03:12:20.388114Z","end":"2025-11-24T03:12:20.503540Z","steps":["trace[666866352] 'agreement among raft nodes before linearized reading'  (duration: 53.243914ms)","trace[666866352] 'range keys from in-memory index tree'  (duration: 61.891896ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T03:12:20.503462Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.164758ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-w62hm\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-24T03:12:20.503674Z","caller":"traceutil/trace.go:172","msg":"trace[1529185405] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-w62hm; range_end:; response_count:1; response_revision:517; }","duration":"107.369154ms","start":"2025-11-24T03:12:20.396285Z","end":"2025-11-24T03:12:20.503654Z","steps":["trace[1529185405] 'agreement among raft nodes before linearized reading'  (duration: 107.079744ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:20.503446Z","caller":"traceutil/trace.go:172","msg":"trace[1306895237] transaction","detail":"{read_only:false; response_revision:517; number_of_response:1; }","duration":"115.98788ms","start":"2025-11-24T03:12:20.387430Z","end":"2025-11-24T03:12:20.503418Z","steps":["trace[1306895237] 'process raft request'  (duration: 53.983981ms)","trace[1306895237] 'compare'  (duration: 61.856366ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:20.810566Z","caller":"traceutil/trace.go:172","msg":"trace[2065875771] linearizableReadLoop","detail":"{readStateIndex:553; appliedIndex:553; }","duration":"107.206381ms","start":"2025-11-24T03:12:20.703333Z","end":"2025-11-24T03:12:20.810539Z","steps":["trace[2065875771] 'read index received'  (duration: 107.199251ms)","trace[2065875771] 'applied index is now lower than readState.Index'  (duration: 6.474µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T03:12:20.873454Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.089505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/disruption-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-24T03:12:20.873545Z","caller":"traceutil/trace.go:172","msg":"trace[53039755] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/disruption-controller; range_end:; response_count:1; response_revision:525; }","duration":"170.199895ms","start":"2025-11-24T03:12:20.703330Z","end":"2025-11-24T03:12:20.873530Z","steps":["trace[53039755] 'agreement among raft nodes before linearized reading'  (duration: 107.290629ms)","trace[53039755] 'range keys from in-memory index tree'  (duration: 62.698852ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:20.873662Z","caller":"traceutil/trace.go:172","msg":"trace[298306434] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"172.208081ms","start":"2025-11-24T03:12:20.701436Z","end":"2025-11-24T03:12:20.873644Z","steps":["trace[298306434] 'process raft request'  (duration: 109.179988ms)","trace[298306434] 'compare'  (duration: 62.857532ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:13:09 up  1:55,  0 user,  load average: 4.25, 4.08, 2.70
	Linux default-k8s-diff-port-993813 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e9aedcc7b2f459c0aa678060a0430af50f95c9ae8cc09573789ea82fcb7fafac] <==
	I1124 03:12:19.517827       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:12:19.518099       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 03:12:19.518258       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:12:19.518277       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:12:19.518321       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:12:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:12:19.723290       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:12:19.723319       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:12:19.723331       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:12:19.723739       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:12:20.023559       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:12:20.023594       1 metrics.go:72] Registering metrics
	I1124 03:12:20.023657       1 controller.go:711] "Syncing nftables rules"
	I1124 03:12:29.724849       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:12:29.726069       1 main.go:301] handling current node
	I1124 03:12:39.730016       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:12:39.730057       1 main.go:301] handling current node
	I1124 03:12:49.723125       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:12:49.723163       1 main.go:301] handling current node
	I1124 03:12:59.723060       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:12:59.723096       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6] <==
	I1124 03:12:18.047818       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 03:12:18.047844       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:12:18.047878       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:12:18.048106       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 03:12:18.048163       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 03:12:18.052552       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 03:12:18.052963       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 03:12:18.053115       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:12:18.060329       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 03:12:18.062946       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:12:18.067768       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 03:12:18.070063       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 03:12:18.070156       1 policy_source.go:240] refreshing policies
	I1124 03:12:18.098962       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:12:18.531798       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:12:18.579924       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:12:18.607630       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:12:18.619260       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:12:18.628405       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:12:18.677331       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.79.15"}
	I1124 03:12:18.695874       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.216.245"}
	I1124 03:12:18.942726       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:12:21.549322       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:12:21.700529       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:12:21.897849       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7] <==
	I1124 03:12:21.386946       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 03:12:21.387012       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:12:21.390186       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:12:21.392565       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 03:12:21.392648       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:12:21.394846       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:12:21.396497       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:12:21.397581       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 03:12:21.400285       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:12:21.402536       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:12:21.403734       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:12:21.407094       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 03:12:21.407255       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 03:12:21.407367       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 03:12:21.407435       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 03:12:21.407474       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 03:12:21.409409       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:12:21.410771       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:12:21.415999       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:12:21.419248       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:12:21.419336       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:12:21.419357       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:12:21.419369       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:12:21.442788       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:12:21.446852       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [98b77ba6e3b6b9a9bb0fd551092cc96efbc1de2ae458e7b1cda2d0aa23b17186] <==
	I1124 03:12:19.314263       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:12:19.380232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:12:19.480866       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:12:19.480943       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 03:12:19.481014       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:12:19.501376       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:12:19.501517       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:12:19.507214       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:12:19.507765       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:12:19.507841       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:12:19.509500       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:12:19.509540       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:12:19.509694       1 config.go:309] "Starting node config controller"
	I1124 03:12:19.509722       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:12:19.510405       1 config.go:200] "Starting service config controller"
	I1124 03:12:19.510416       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:12:19.510508       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:12:19.510518       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:12:19.610060       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:12:19.611161       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:12:19.611372       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:12:19.611465       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329] <==
	I1124 03:12:15.485204       1 serving.go:386] Generated self-signed cert in-memory
	W1124 03:12:17.973201       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:12:17.973248       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 03:12:17.973260       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:12:17.973270       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:12:18.025611       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:12:18.025645       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:12:18.028552       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:12:18.028636       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:12:18.032698       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:12:18.032787       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:12:18.129180       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:12:22 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:22.491651     723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 03:12:24 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:24.957143     723 scope.go:117] "RemoveContainer" containerID="e1d1fde154b8d5e5df9cfa39e9674178a4b900188ee3ff7569088cb072f84098"
	Nov 24 03:12:25 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:25.962856     723 scope.go:117] "RemoveContainer" containerID="e1d1fde154b8d5e5df9cfa39e9674178a4b900188ee3ff7569088cb072f84098"
	Nov 24 03:12:25 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:25.963375     723 scope.go:117] "RemoveContainer" containerID="f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050"
	Nov 24 03:12:25 default-k8s-diff-port-993813 kubelet[723]: E1124 03:12:25.964017     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z8ltc_kubernetes-dashboard(7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc" podUID="7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6"
	Nov 24 03:12:26 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:26.968782     723 scope.go:117] "RemoveContainer" containerID="f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050"
	Nov 24 03:12:26 default-k8s-diff-port-993813 kubelet[723]: E1124 03:12:26.969002     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z8ltc_kubernetes-dashboard(7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc" podUID="7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6"
	Nov 24 03:12:27 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:27.972507     723 scope.go:117] "RemoveContainer" containerID="f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050"
	Nov 24 03:12:27 default-k8s-diff-port-993813 kubelet[723]: E1124 03:12:27.972706     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z8ltc_kubernetes-dashboard(7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc" podUID="7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6"
	Nov 24 03:12:31 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:31.118425     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6tmlg" podStartSLOduration=2.129583617 podStartE2EDuration="9.11840663s" podCreationTimestamp="2025-11-24 03:12:22 +0000 UTC" firstStartedPulling="2025-11-24 03:12:22.3577483 +0000 UTC m=+8.641628160" lastFinishedPulling="2025-11-24 03:12:29.34657131 +0000 UTC m=+15.630451173" observedRunningTime="2025-11-24 03:12:29.996959599 +0000 UTC m=+16.280839471" watchObservedRunningTime="2025-11-24 03:12:31.11840663 +0000 UTC m=+17.402286500"
	Nov 24 03:12:39 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:39.884603     723 scope.go:117] "RemoveContainer" containerID="f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050"
	Nov 24 03:12:40 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:40.009770     723 scope.go:117] "RemoveContainer" containerID="f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050"
	Nov 24 03:12:40 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:40.010035     723 scope.go:117] "RemoveContainer" containerID="2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff"
	Nov 24 03:12:40 default-k8s-diff-port-993813 kubelet[723]: E1124 03:12:40.010267     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z8ltc_kubernetes-dashboard(7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc" podUID="7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6"
	Nov 24 03:12:47 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:47.161181     723 scope.go:117] "RemoveContainer" containerID="2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff"
	Nov 24 03:12:47 default-k8s-diff-port-993813 kubelet[723]: E1124 03:12:47.161345     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z8ltc_kubernetes-dashboard(7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc" podUID="7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6"
	Nov 24 03:12:50 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:50.036817     723 scope.go:117] "RemoveContainer" containerID="1bd7fbd7ac7308bdb9bfcef37d44d50f647796051adcb416cecd8027eff0b98e"
	Nov 24 03:13:00 default-k8s-diff-port-993813 kubelet[723]: I1124 03:13:00.884491     723 scope.go:117] "RemoveContainer" containerID="2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff"
	Nov 24 03:13:01 default-k8s-diff-port-993813 kubelet[723]: I1124 03:13:01.067565     723 scope.go:117] "RemoveContainer" containerID="2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff"
	Nov 24 03:13:01 default-k8s-diff-port-993813 kubelet[723]: I1124 03:13:01.067808     723 scope.go:117] "RemoveContainer" containerID="ca56ee1046dfd67567192ecb6131590bdde60902b351bebce38f90604b67c2ba"
	Nov 24 03:13:01 default-k8s-diff-port-993813 kubelet[723]: E1124 03:13:01.068137     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z8ltc_kubernetes-dashboard(7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc" podUID="7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6"
	Nov 24 03:13:06 default-k8s-diff-port-993813 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 03:13:06 default-k8s-diff-port-993813 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 03:13:06 default-k8s-diff-port-993813 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 03:13:06 default-k8s-diff-port-993813 systemd[1]: kubelet.service: Consumed 1.621s CPU time.
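The dashboard-metrics-scraper churn above also shows kubelet's per-container crash back-off doubling between restarts: back-off 10s, then 20s, then 40s. A minimal sketch of that progression, using what are believed to be kubelet's defaults (10s initial period, 5-minute cap) rather than anything read from this cluster:

// Print the restart back-off sequence kubelet applies to a
// crash-looping container: double on each failure, capped at 5m.
package main

import (
	"fmt"
	"time"
)

func main() {
	backoff, maxBackoff := 10*time.Second, 5*time.Minute
	for i := 1; i <= 7; i++ {
		fmt.Printf("restart %d: back-off %v\n", i, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}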
	
	
	==> kubernetes-dashboard [93cf9607a612fd45cf69895841118ca18e88cd31bd1ae578c8b2d22db2c14cad] <==
	2025/11/24 03:12:29 Starting overwatch
	2025/11/24 03:12:29 Using namespace: kubernetes-dashboard
	2025/11/24 03:12:29 Using in-cluster config to connect to apiserver
	2025/11/24 03:12:29 Using secret token for csrf signing
	2025/11/24 03:12:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 03:12:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 03:12:29 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 03:12:29 Generating JWE encryption key
	2025/11/24 03:12:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 03:12:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 03:12:29 Initializing JWE encryption key from synchronized object
	2025/11/24 03:12:29 Creating in-cluster Sidecar client
	2025/11/24 03:12:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 03:12:29 Serving insecurely on HTTP port: 9090
	2025/11/24 03:12:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1bd7fbd7ac7308bdb9bfcef37d44d50f647796051adcb416cecd8027eff0b98e] <==
	I1124 03:12:19.278026       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 03:12:49.282494       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [573f6a7cb37360f3f1fa23cf31377d96a60d7bd6c0e83b06a385f1fcb540e103] <==
	I1124 03:12:50.089868       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:12:50.097398       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:12:50.097458       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:12:50.099333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:53.554672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:57.815397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:01.414061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:04.468069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:07.489986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:07.495821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:13:07.495992       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:13:07.496147       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-993813_f542bdb1-ded6-45e0-9622-2372d8336bb7!
	I1124 03:13:07.496153       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3f822d05-a76f-4ae4-9301-4b0cf90b6f0e", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-993813_f542bdb1-ded6-45e0-9622-2372d8336bb7 became leader
	W1124 03:13:07.498653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:07.502141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:13:07.596377       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-993813_f542bdb1-ded6-45e0-9622-2372d8336bb7!
	W1124 03:13:09.505341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:09.509108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
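Note the hand-off between the two storage-provisioner instances in the dump above: the restarted one ([573f6a7c...]) only starts its provisioner controller after acquiring the kube-system/k8s.io-minikube-hostpath lease, and the repeated v1 Endpoints deprecation warnings suggest it still uses the older Endpoints-based lock. Below is a minimal client-go sketch of the same pattern, but with the current Lease-based lock; the lease name, namespace, and identity are copied from the log, everything else is illustrative rather than minikube's actual code.

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease-based lock; the provisioner in the log uses the deprecated
	// Endpoints equivalent, hence its warnings.go:70 messages.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client: client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{
			Identity: "default-k8s-diff-port-993813_f542bdb1-ded6-45e0-9622-2372d8336bb7",
		},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; stopping")
			},
		},
	})
}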
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813: exit status 2 (338.102644ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
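For reference, `minikube status` reports cluster state via its exit code, which is why the harness prints the non-zero exit but continues ("may be ok"): in a pause test the host keeps running while components are intentionally stopped. A small sketch of tolerating that, with the binary path and profile name taken from the log; the exit-code handling is an assumed illustration of such a helper, not helpers_test.go itself.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "default-k8s-diff-port-993813")
	out, err := cmd.CombinedOutput()
	fmt.Printf("status: %s", out)
	if exitErr, ok := err.(*exec.ExitError); ok {
		// Non-zero codes describe component state (here the paused
		// apiserver); for post-mortem collection that is informational.
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube status:", err)
	}
}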
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-993813 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-993813
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-993813:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8",
	        "Created": "2025-11-24T03:10:55.916288058Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 656843,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:12:05.040714034Z",
	            "FinishedAt": "2025-11-24T03:12:04.193532321Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8/hostname",
	        "HostsPath": "/var/lib/docker/containers/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8/hosts",
	        "LogPath": "/var/lib/docker/containers/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8/b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8-json.log",
	        "Name": "/default-k8s-diff-port-993813",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-993813:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-993813",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b38aecdd5f9d7d0755520db1b10b5fb2873cd3983375ae02f886d1628c6a05c8",
	                "LowerDir": "/var/lib/docker/overlay2/6fe49d723a85944578f35e13638f43b6277cc82d6ac33569536577f7d90c4edd-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6fe49d723a85944578f35e13638f43b6277cc82d6ac33569536577f7d90c4edd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6fe49d723a85944578f35e13638f43b6277cc82d6ac33569536577f7d90c4edd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6fe49d723a85944578f35e13638f43b6277cc82d6ac33569536577f7d90c4edd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-993813",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-993813/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-993813",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-993813",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-993813",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "07e27320d0bcf192a08231c130bc772c75cc476c063f5b8b8867087b38a27191",
	            "SandboxKey": "/var/run/docker/netns/07e27320d0bc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33488"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33489"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33492"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33490"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33491"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-993813": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "50b2e4e61586f7fb59c4f56c2607ad50e6dc9faf4b2e274df27c397b878fe391",
	                    "EndpointID": "8bec0b259cf3bdbfbcf94795f0c484c0f8c8b83f2d759caefe6aa476c44ed74b",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "fa:4b:98:39:1c:ec",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-993813",
	                        "b38aecdd5f9d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
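The NetworkSettings.Ports block above is what ties this profile's --apiserver-port=8444 to a host-side address (127.0.0.1:33491 here). A sketch of extracting that mapping from `docker inspect` programmatically; the container name comes from the log, the rest is illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-993813").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Decode only the port bindings from the inspect JSON array.
	var containers []struct {
		NetworkSettings struct {
			Ports map[string][]struct{ HostIp, HostPort string }
		}
	}
	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
		log.Fatal("unexpected inspect output: ", err)
	}
	// 8444/tcp is the non-default apiserver port this profile was started with.
	for _, b := range containers[0].NetworkSettings.Ports["8444/tcp"] {
		fmt.Printf("apiserver published at %s:%s\n", b.HostIp, b.HostPort)
	}
}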
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813: exit status 2 (332.549869ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-993813 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-993813 logs -n 25: (1.119887255s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-603010 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-579951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-438041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ image   │ newest-cni-438041 image list --format=json                                                                                                                                                                                                    │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p newest-cni-438041 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p no-preload-603010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p disable-driver-mounts-242597                                                                                                                                                                                                               │ disable-driver-mounts-242597 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ image   │ old-k8s-version-579951 image list --format=json                                                                                                                                                                                               │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p old-k8s-version-579951 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                                                                                     │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                                                                                     │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable metrics-server -p embed-certs-284604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ stop    │ -p embed-certs-284604 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ image   │ default-k8s-diff-port-993813 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ pause   │ -p default-k8s-diff-port-993813 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ image   │ no-preload-603010 image list --format=json                                                                                                                                                                                                    │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ pause   │ -p no-preload-603010 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:12:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:12:09.055015  658811 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:12:09.055230  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055247  658811 out.go:374] Setting ErrFile to fd 2...
	I1124 03:12:09.055253  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055468  658811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:12:09.055909  658811 out.go:368] Setting JSON to false
	I1124 03:12:09.056956  658811 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6876,"bootTime":1763947053,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:12:09.057009  658811 start.go:143] virtualization: kvm guest
	I1124 03:12:09.058671  658811 out.go:179] * [embed-certs-284604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:12:09.059850  658811 notify.go:221] Checking for updates...
	I1124 03:12:09.059855  658811 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:12:09.061128  658811 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:12:09.062317  658811 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:09.063358  658811 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:12:09.064255  658811 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:12:09.065078  658811 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:12:09.066407  658811 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066509  658811 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066589  658811 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:12:09.066666  658811 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:12:09.089713  658811 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:12:09.089855  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.145948  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.135562124 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.146071  658811 docker.go:319] overlay module found
	I1124 03:12:09.147708  658811 out.go:179] * Using the docker driver based on user configuration
	I1124 03:12:09.148714  658811 start.go:309] selected driver: docker
	I1124 03:12:09.148737  658811 start.go:927] validating driver "docker" against <nil>
	I1124 03:12:09.148747  658811 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:12:09.149338  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.210343  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.200351707 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
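
The driver check above boils down to parsing the output of docker system info. To spot-check by hand the same fields minikube validates here, a minimal sketch (template field names match the Go struct dumped in the log; output shape may vary across Docker releases):

	docker system info --format \
	  'version={{.ServerVersion}} cgroup={{.CgroupDriver}} cpus={{.NCPU}} mem={{.MemTotal}}'
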
	I1124 03:12:09.210534  658811 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:12:09.210794  658811 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:09.212381  658811 out.go:179] * Using Docker driver with root privileges
	I1124 03:12:09.213398  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:09.213482  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:09.213497  658811 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:12:09.213574  658811 start.go:353] cluster config:
	{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
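
A profile config like the one dumped above is generated from minikube start flags. A hedged sketch of an equivalent invocation (flag names are standard minikube CLI flags; runtime-chosen values such as the subnet are not settable this way):

	minikube start -p embed-certs-284604 \
	  --driver=docker \
	  --container-runtime=crio \
	  --kubernetes-version=v1.34.1 \
	  --memory=3072 --cpus=2 \
	  --embed-certs=true
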
	I1124 03:12:09.214730  658811 out.go:179] * Starting "embed-certs-284604" primary control-plane node in "embed-certs-284604" cluster
	I1124 03:12:09.215613  658811 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:12:09.216663  658811 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:12:09.217654  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.217694  658811 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:12:09.217703  658811 cache.go:65] Caching tarball of preloaded images
	I1124 03:12:09.217732  658811 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:12:09.217791  658811 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:12:09.217808  658811 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:12:09.217977  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:09.218021  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json: {Name:mkd4898576ebe0ebf6d2ca35fddd33eac8f127df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:09.239944  658811 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:12:09.239962  658811 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:12:09.239976  658811 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:12:09.240004  658811 start.go:360] acquireMachinesLock for embed-certs-284604: {Name:mkd39be5908e1d289ed5af40b6c2b1c510beffd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:12:09.240088  658811 start.go:364] duration metric: took 68.665µs to acquireMachinesLock for "embed-certs-284604"
	I1124 03:12:09.240109  658811 start.go:93] Provisioning new machine with config: &{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:09.240182  658811 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:12:05.014758  656542 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-993813" ...
	I1124 03:12:05.014805  656542 cli_runner.go:164] Run: docker start default-k8s-diff-port-993813
	I1124 03:12:05.297424  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:05.316835  656542 kic.go:430] container "default-k8s-diff-port-993813" state is running.
	I1124 03:12:05.317309  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:05.336690  656542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/config.json ...
	I1124 03:12:05.336923  656542 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:05.336992  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:05.356564  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:05.356863  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:05.356907  656542 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:05.357642  656542 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39256->127.0.0.1:33488: read: connection reset by peer
	I1124 03:12:08.497704  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.497744  656542 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993813"
	I1124 03:12:08.497799  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.516284  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.516620  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.516642  656542 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993813 && echo "default-k8s-diff-port-993813" | sudo tee /etc/hostname
	I1124 03:12:08.664299  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.664399  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.683215  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.683424  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.683440  656542 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993813/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:08.824495  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
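
The hostname script above is an idempotent edit: it rewrites an existing 127.0.1.1 line when one is present and appends one otherwise, so re-running provisioning never duplicates entries. A standalone sketch of the same pattern (the function name ensure_host_alias is illustrative, not minikube code):

	ensure_host_alias() {   # usage: ensure_host_alias <name>
	  local name="$1"
	  if ! grep -q "\s${name}$" /etc/hosts; then
	    if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
	      sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${name}/" /etc/hosts
	    else
	      echo "127.0.1.1 ${name}" | sudo tee -a /etc/hosts >/dev/null
	    fi
	  fi
	}
	ensure_host_alias default-k8s-diff-port-993813
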
	I1124 03:12:08.824534  656542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:08.824571  656542 ubuntu.go:190] setting up certificates
	I1124 03:12:08.824597  656542 provision.go:84] configureAuth start
	I1124 03:12:08.824659  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:08.842592  656542 provision.go:143] copyHostCerts
	I1124 03:12:08.842639  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:08.842651  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:08.842701  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:08.842805  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:08.842813  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:08.842838  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:08.842940  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:08.842950  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:08.842981  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:08.843051  656542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993813 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-993813 localhost minikube]
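
The SAN list logged above (loopback, the container IP, the profile name, localhost, minikube) should appear verbatim in the generated server certificate; openssl can confirm that (a sketch, using the path from the log and assuming openssl is installed on the host):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem |
	  grep -A1 'Subject Alternative Name'
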
	I1124 03:12:08.993088  656542 provision.go:177] copyRemoteCerts
	I1124 03:12:08.993141  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:08.993180  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.010481  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.112610  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:09.134182  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 03:12:09.153393  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:12:09.173516  656542 provision.go:87] duration metric: took 348.902104ms to configureAuth
	I1124 03:12:09.173547  656542 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:09.173717  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.173820  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.195519  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:09.195738  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:09.195756  656542 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.551404  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:09.551434  656542 machine.go:97] duration metric: took 4.214494542s to provisionDockerMachine
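
The registry option is written to /etc/sysconfig/crio.minikube and CRI-O is restarted in the same SSH command. A quick hedged sanity check that both took effect (assuming systemd inside the node, as the log shows):

	cat /etc/sysconfig/crio.minikube   # expect CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio      # expect "active" after the restart
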
	I1124 03:12:09.551449  656542 start.go:293] postStartSetup for "default-k8s-diff-port-993813" (driver="docker")
	I1124 03:12:09.551463  656542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:09.551533  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:09.551574  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.572440  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.684044  656542 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:09.688328  656542 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:09.688354  656542 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:09.688365  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:09.688414  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:09.688488  656542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:09.688660  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:09.696023  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:09.725715  656542 start.go:296] duration metric: took 174.248037ms for postStartSetup
	I1124 03:12:09.725795  656542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:09.725851  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.747235  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:06.610202  657716 out.go:252] * Restarting existing docker container for "no-preload-603010" ...
	I1124 03:12:06.610267  657716 cli_runner.go:164] Run: docker start no-preload-603010
	I1124 03:12:06.895418  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:06.913279  657716 kic.go:430] container "no-preload-603010" state is running.
	I1124 03:12:06.913694  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:06.931543  657716 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/config.json ...
	I1124 03:12:06.931779  657716 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:06.931840  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:06.949180  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:06.949422  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:06.949436  657716 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:06.950106  657716 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53738->127.0.0.1:33493: read: connection reset by peer
	I1124 03:12:10.094410  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.094455  657716 ubuntu.go:182] provisioning hostname "no-preload-603010"
	I1124 03:12:10.094548  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.117277  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.117614  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.117637  657716 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-603010 && echo "no-preload-603010" | sudo tee /etc/hostname
	I1124 03:12:10.272082  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.272162  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.293197  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.293525  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.293557  657716 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603010' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603010/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603010' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:10.440289  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:10.440322  657716 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:10.440350  657716 ubuntu.go:190] setting up certificates
	I1124 03:12:10.440374  657716 provision.go:84] configureAuth start
	I1124 03:12:10.440443  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:10.458672  657716 provision.go:143] copyHostCerts
	I1124 03:12:10.458743  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:10.458766  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:10.458857  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:10.459021  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:10.459037  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:10.459080  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:10.459183  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:10.459195  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:10.459232  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:10.459323  657716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.no-preload-603010 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-603010]
	I1124 03:12:10.546420  657716 provision.go:177] copyRemoteCerts
	I1124 03:12:10.546503  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:10.546552  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.564799  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:10.669343  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:10.687953  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:10.707320  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:10.728398  657716 provision.go:87] duration metric: took 288.002675ms to configureAuth
	I1124 03:12:10.728450  657716 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:10.728791  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:10.728992  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.754544  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.754857  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.754907  657716 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.846210  656542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:09.851045  656542 fix.go:56] duration metric: took 4.853815531s for fixHost
	I1124 03:12:09.851067  656542 start.go:83] releasing machines lock for "default-k8s-diff-port-993813", held for 4.853861223s
	I1124 03:12:09.851139  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:09.871679  656542 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:09.871744  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.871767  656542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:09.871859  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.897665  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.897832  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.996390  656542 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:10.070447  656542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:10.108350  656542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:10.113659  656542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:10.113732  656542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:10.122258  656542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
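
Rather than deleting competing bridge/podman CNI configs, minikube renames them with a .mk_disabled suffix (the find ... -exec mv above), which keeps the change reversible. A sketch of listing and undoing it (the restore loop is my addition, not minikube tooling):

	ls /etc/cni/net.d/*.mk_disabled 2>/dev/null    # anything minikube sidelined
	for f in /etc/cni/net.d/*.mk_disabled; do
	  [ -e "$f" ] && sudo mv "$f" "${f%.mk_disabled}"
	done
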
	I1124 03:12:10.122274  656542 start.go:496] detecting cgroup driver to use...
	I1124 03:12:10.122301  656542 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:10.122333  656542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:10.138420  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:10.151623  656542 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:10.151696  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:10.169717  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:10.185403  656542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:10.268937  656542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:10.361626  656542 docker.go:234] disabling docker service ...
	I1124 03:12:10.361713  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:10.376259  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:10.389709  656542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:10.493317  656542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:10.581163  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:10.594309  656542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:10.608489  656542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:10.608559  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.618090  656542 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:10.618147  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.629142  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.639755  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.648289  656542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:10.657390  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.667835  656542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.677148  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.686554  656542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:10.694262  656542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:10.701983  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:10.784645  656542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:12:13.176259  656542 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.391580237s)
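
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a pinned pause image, the systemd cgroup manager, a pod-scoped conmon cgroup, and unprivileged low ports enabled. An equivalent one-shot write, as a hedged sketch (the section headers come from stock CRI-O defaults; the log edits the file in place rather than showing it):

	sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null <<-'EOF'
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio
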
	I1124 03:12:13.176297  656542 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:13.176344  656542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:13.182771  656542 start.go:564] Will wait 60s for crictl version
	I1124 03:12:13.182920  656542 ssh_runner.go:195] Run: which crictl
	I1124 03:12:13.188282  656542 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:13.221129  656542 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:13.221208  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.256022  656542 ssh_runner.go:195] Run: crio --version
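
The version probe goes over the CRI socket configured in /etc/crictl.yaml above; the equivalent manual call uses standard crictl flags:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
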
	I1124 03:12:13.289098  656542 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1124 03:12:09.667322  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:11.810684  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:09.241811  658811 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:12:09.242074  658811 start.go:159] libmachine.API.Create for "embed-certs-284604" (driver="docker")
	I1124 03:12:09.242107  658811 client.go:173] LocalClient.Create starting
	I1124 03:12:09.242186  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:12:09.242224  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242246  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242326  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:12:09.242354  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242374  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242824  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:12:09.259427  658811 cli_runner.go:211] docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:12:09.259477  658811 network_create.go:284] running [docker network inspect embed-certs-284604] to gather additional debugging logs...
	I1124 03:12:09.259492  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604
	W1124 03:12:09.275004  658811 cli_runner.go:211] docker network inspect embed-certs-284604 returned with exit code 1
	I1124 03:12:09.275029  658811 network_create.go:287] error running [docker network inspect embed-certs-284604]: docker network inspect embed-certs-284604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-284604 not found
	I1124 03:12:09.275039  658811 network_create.go:289] output of [docker network inspect embed-certs-284604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-284604 not found
	
	** /stderr **
	I1124 03:12:09.275132  658811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:09.292074  658811 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:12:09.292745  658811 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:12:09.293207  658811 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:12:09.293801  658811 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-50b2e4e61586 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:0d:88:19:7d:df} reservation:<nil>}
	I1124 03:12:09.294406  658811 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6fb41680caed IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:a0:6c:44:95:b2} reservation:<nil>}
	I1124 03:12:09.295273  658811 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eef7f0}
	I1124 03:12:09.295296  658811 network_create.go:124] attempt to create docker network embed-certs-284604 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 03:12:09.295333  658811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-284604 embed-certs-284604
	I1124 03:12:09.341016  658811 network_create.go:108] docker network embed-certs-284604 192.168.94.0/24 created
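
Subnet selection above simply walks candidate /24 blocks (here 192.168.49.0 through 192.168.85.0, in steps of 9) until one is free, then creates the network. The result can be confirmed with a standard inspect template (a sketch):

	docker network inspect embed-certs-284604 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
	# expected here: 192.168.94.0/24 gw 192.168.94.1
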
	I1124 03:12:09.341044  658811 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-284604" container
	I1124 03:12:09.341097  658811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:12:09.358710  658811 cli_runner.go:164] Run: docker volume create embed-certs-284604 --label name.minikube.sigs.k8s.io=embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:12:09.377491  658811 oci.go:103] Successfully created a docker volume embed-certs-284604
	I1124 03:12:09.377565  658811 cli_runner.go:164] Run: docker run --rm --name embed-certs-284604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --entrypoint /usr/bin/test -v embed-certs-284604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:12:09.757637  658811 oci.go:107] Successfully prepared a docker volume embed-certs-284604
	I1124 03:12:09.757726  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.757742  658811 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:12:09.757816  658811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:12:13.055592  658811 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (3.297719307s)
	I1124 03:12:13.055632  658811 kic.go:203] duration metric: took 3.29788472s to extract preloaded images to volume ...
	W1124 03:12:13.055721  658811 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:12:13.055758  658811 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:12:13.055810  658811 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:12:13.124836  658811 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-284604 --name embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-284604 --network embed-certs-284604 --ip 192.168.94.2 --volume embed-certs-284604:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
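
The run command publishes SSH (22), the API server (8443), and a few auxiliary ports on ephemeral 127.0.0.1 ports; the later container-inspect calls just resolve those mappings. docker port does the same interactively (a sketch):

	docker port embed-certs-284604 22/tcp     # host endpoint used for SSH provisioning
	docker port embed-certs-284604 8443/tcp   # host endpoint for the Kubernetes API server
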
	I1124 03:12:13.468642  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Running}}
	I1124 03:12:13.493010  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.520114  658811 cli_runner.go:164] Run: docker exec embed-certs-284604 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:12:13.579438  658811 oci.go:144] the created container "embed-certs-284604" has a running status.
	I1124 03:12:13.579473  658811 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa...
	I1124 03:12:13.686392  658811 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:12:13.719014  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.744934  658811 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:12:13.744979  658811 kic_runner.go:114] Args: [docker exec --privileged embed-certs-284604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:12:13.804379  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.833184  658811 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:13.833391  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:13.865266  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:13.865635  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:13.865670  658811 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:13.866448  658811 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55158->127.0.0.1:33498: read: connection reset by peer
	I1124 03:12:13.290552  656542 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:13.314170  656542 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:13.318716  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:13.333300  656542 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:13.333436  656542 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:13.333523  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.375001  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.375027  656542 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:13.375078  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.407152  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.407180  656542 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:13.407190  656542 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 03:12:13.407342  656542 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
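
This drop-in is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below; systemd then merges it with the base unit, with the empty ExecStart= line clearing the packaged command before the minikube one is set. The merged result can be inspected with standard systemctl (a sketch):

	systemctl cat kubelet                        # base unit plus the 10-kubeadm.conf drop-in
	systemctl show kubelet -p ExecStart --no-pager
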
	I1124 03:12:13.407444  656542 ssh_runner.go:195] Run: crio config
	I1124 03:12:13.468159  656542 cni.go:84] Creating CNI manager for ""
	I1124 03:12:13.468191  656542 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:13.468220  656542 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:13.468251  656542 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993813 NodeName:default-k8s-diff-port-993813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:13.468425  656542 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993813"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:13.468485  656542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:13.480922  656542 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:13.480989  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:13.491437  656542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 03:12:13.510538  656542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:13.531599  656542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1124 03:12:13.550625  656542 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:13.557123  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
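The /etc/hosts rewrite above is idempotent: grep -v strips any stale control-plane.minikube.internal entry, the fresh mapping is appended, and the temp file is copied back with sudo (the redirection itself runs unprivileged, hence the temp file). The same idiom, generalized as an illustrative helper (the function name is ours, not minikube's; the hostname is treated as a regex, exactly as in the original command):

	update_hosts_entry() {  # illustrative only
	  local ip="$1" host="$2"
	  { grep -v $'\t'"$host"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$host"; } > "/tmp/hosts.$$"
	  sudo cp "/tmp/hosts.$$" /etc/hosts
	}
	update_hosts_entry 192.168.76.2 control-plane.minikube.internal
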
	I1124 03:12:13.570105  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:13.687069  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:13.711246  656542 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813 for IP: 192.168.76.2
	I1124 03:12:13.711268  656542 certs.go:195] generating shared ca certs ...
	I1124 03:12:13.711287  656542 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:13.711456  656542 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:13.711513  656542 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:13.711526  656542 certs.go:257] generating profile certs ...
	I1124 03:12:13.711642  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key
	I1124 03:12:13.711706  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619
	I1124 03:12:13.711753  656542 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key
	I1124 03:12:13.711996  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:13.712051  656542 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:13.712065  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:13.712101  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:13.712139  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:13.712175  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:13.712240  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.712851  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:13.744604  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:13.773924  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:13.797454  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:13.831783  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 03:12:13.870484  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:13.900124  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:13.922822  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:12:13.948171  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:13.977351  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:14.003032  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:14.029032  656542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:14.044929  656542 ssh_runner.go:195] Run: openssl version
	I1124 03:12:14.055102  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:14.069569  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074149  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074206  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.129455  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:14.139467  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:14.150460  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155547  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155598  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.213122  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:14.224488  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:14.235043  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239741  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239796  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.296275  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
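The hex-named symlinks (51391683.0, 3ec20f2e.0, b5213941.0) follow OpenSSL's subject-hash convention: the library looks up trust anchors in /etc/ssl/certs by the hash that `openssl x509 -hash` prints, with a numeric suffix to disambiguate hash collisions. Reproducing one link by hand:

	# the hash is derived from the certificate subject; ".0" marks the first
	# (usually only) certificate with that hash
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
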
	I1124 03:12:14.307247  656542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:14.315784  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:14.374911  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:14.452037  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:14.514532  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:14.577046  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:14.634822  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
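Each `-checkend 86400` probe asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, nonzero sends minikube down its certificate-regeneration path. The semantics in isolation:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "certificate valid for at least another 24h"
	else
	  echo "certificate expires within 24h (or is already expired)"
	fi
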
	I1124 03:12:14.697600  656542 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:14.697704  656542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:14.697759  656542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:14.736428  656542 cri.go:89] found id: "9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6"
	I1124 03:12:14.736451  656542 cri.go:89] found id: "a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329"
	I1124 03:12:14.736458  656542 cri.go:89] found id: "dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7"
	I1124 03:12:14.736462  656542 cri.go:89] found id: "11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e"
	I1124 03:12:14.736466  656542 cri.go:89] found id: ""
	I1124 03:12:14.736511  656542 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:14.754070  656542 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:14Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:14.754156  656542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:14.765200  656542 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:14.765224  656542 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:14.765273  656542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:14.773243  656542 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:14.773947  656542 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993813" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.774328  656542 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993813" cluster setting kubeconfig missing "default-k8s-diff-port-993813" context setting]
	I1124 03:12:14.774925  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.776519  656542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:14.785657  656542 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 03:12:14.785687  656542 kubeadm.go:602] duration metric: took 20.455875ms to restartPrimaryControlPlane
	I1124 03:12:14.785704  656542 kubeadm.go:403] duration metric: took 88.114399ms to StartCluster
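The restart path hinges on the exit code of that `diff -u`: diff exits 0 when /var/tmp/minikube/kubeadm.yaml already matches the freshly rendered .new file, which is what lets minikube log "does not require reconfiguration" and skip a full kubeadm re-init. Checked in isolation:

	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "configs identical: no reconfiguration needed"
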
	I1124 03:12:14.785722  656542 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.785796  656542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.786941  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.787180  656542 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:14.787429  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:14.787487  656542 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:14.787568  656542 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.787584  656542 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.787592  656542 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:14.787615  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.788183  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.788464  656542 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788516  656542 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993813"
	I1124 03:12:14.788466  656542 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788738  656542 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.788750  656542 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:14.788782  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.789431  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.789731  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.792034  656542 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:14.793166  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.820828  656542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:14.821632  656542 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.821655  656542 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:14.821731  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.821909  656542 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:12:14.822084  656542 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:14.822112  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:14.822188  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.822548  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.827335  656542 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:13.173638  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:13.173665  657716 machine.go:97] duration metric: took 6.241868553s to provisionDockerMachine
	I1124 03:12:13.173679  657716 start.go:293] postStartSetup for "no-preload-603010" (driver="docker")
	I1124 03:12:13.173692  657716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:13.173754  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:13.173803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.199819  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.311414  657716 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:13.316263  657716 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:13.316292  657716 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:13.316304  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:13.316362  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:13.316451  657716 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:13.316564  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:13.330333  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.349678  657716 start.go:296] duration metric: took 175.98281ms for postStartSetup
	I1124 03:12:13.349757  657716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:13.349803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.372668  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.477580  657716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:13.483572  657716 fix.go:56] duration metric: took 6.891356705s for fixHost
	I1124 03:12:13.483602  657716 start.go:83] releasing machines lock for "no-preload-603010", held for 6.891418388s
	I1124 03:12:13.483679  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:13.509057  657716 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:13.509123  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.509169  657716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:13.509281  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.533830  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.535423  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.716640  657716 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:13.727633  657716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:13.784701  657716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:13.789877  657716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:13.789964  657716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:13.799956  657716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:12:13.799989  657716 start.go:496] detecting cgroup driver to use...
	I1124 03:12:13.800021  657716 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:13.800080  657716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:13.821650  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:13.845364  657716 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:13.845437  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:13.876223  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:13.896810  657716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:14.018144  657716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:14.133192  657716 docker.go:234] disabling docker service ...
	I1124 03:12:14.133276  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:14.151812  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:14.167561  657716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:14.282838  657716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:14.401610  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:14.417930  657716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:14.437107  657716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:14.437170  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.449631  657716 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:14.449698  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.462463  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.477641  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.490417  657716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:14.504273  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.516484  657716 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.526509  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.538280  657716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:14.546998  657716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:14.555574  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.685636  657716 ssh_runner.go:195] Run: sudo systemctl restart crio
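The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause_image pinned to registry.k8s.io/pause:3.10.1, cgroup_manager set to systemd, conmon_cgroup reset to "pod", and net.ipv4.ip_unprivileged_port_start=0 injected into default_sysctls, all before the daemon-reload and crio restart. An illustrative spot check of the keys touched (expected values shown approximately; exact file layout varies by CRI-O build):

	sudo grep -E '^(pause_image|cgroup_manager|conmon_cgroup|default_sysctls)' \
	     /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
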
	I1124 03:12:14.944749  657716 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:14.944917  657716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:14.950036  657716 start.go:564] Will wait 60s for crictl version
	I1124 03:12:14.950115  657716 ssh_runner.go:195] Run: which crictl
	I1124 03:12:14.954328  657716 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:14.985292  657716 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:14.985374  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.030503  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.075694  657716 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:15.076822  657716 cli_runner.go:164] Run: docker network inspect no-preload-603010 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:15.102488  657716 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:15.108702  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:15.124431  657716 kubeadm.go:884] updating cluster {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:15.124588  657716 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:15.124636  657716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:15.167486  657716 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:15.167521  657716 cache_images.go:86] Images are preloaded, skipping loading
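"all images are preloaded" means the preload tarball already populated CRI-O's image store, so no per-image transfer is needed. A sketch of the same check done by hand (jq is an assumption on our part; minikube itself parses the JSON in Go):

	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort
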
	I1124 03:12:15.167539  657716 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:15.167821  657716 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-603010 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
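The empty `ExecStart=` line in the generated drop-in is deliberate systemd syntax: it clears the ExecStart inherited from the packaged kubelet.service before the override command takes effect, so only minikube's flags run. The merged unit can be inspected with:

	systemctl cat kubelet              # base unit plus all drop-ins
	systemctl show -p ExecStart kubelet
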
	I1124 03:12:15.167925  657716 ssh_runner.go:195] Run: crio config
	I1124 03:12:15.235069  657716 cni.go:84] Creating CNI manager for ""
	I1124 03:12:15.235092  657716 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:15.235110  657716 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:15.235137  657716 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603010 NodeName:no-preload-603010 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:15.235315  657716 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603010"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
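In the KubeProxyConfiguration above, `maxPerCore: 0` (together with the two zeroed timeouts) tells kube-proxy to leave the host's conntrack sysctls untouched, which matters in container environments where those sysctls are read-only. The values kube-proxy would otherwise manage can be read directly on the node:

	sysctl net.netfilter.nf_conntrack_max
	sysctl net.netfilter.nf_conntrack_tcp_timeout_established
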
	I1124 03:12:15.235402  657716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:15.246426  657716 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:15.246486  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:15.255073  657716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:12:15.274174  657716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:15.291964  657716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1124 03:12:15.310704  657716 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:15.315241  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:15.329049  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:15.444004  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:15.468249  657716 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010 for IP: 192.168.85.2
	I1124 03:12:15.468275  657716 certs.go:195] generating shared ca certs ...
	I1124 03:12:15.468303  657716 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:15.468461  657716 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:15.468527  657716 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:15.468545  657716 certs.go:257] generating profile certs ...
	I1124 03:12:15.468671  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key
	I1124 03:12:15.468756  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738
	I1124 03:12:15.468820  657716 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key
	I1124 03:12:15.469056  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:15.469155  657716 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:15.469190  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:15.469235  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:15.469307  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:15.469360  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:15.469452  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:15.470423  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:15.492954  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:15.516840  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:15.539720  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:15.572434  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:12:15.602383  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:15.627969  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:15.650700  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:15.671263  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:15.692710  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:15.715510  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:15.740163  657716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:15.756242  657716 ssh_runner.go:195] Run: openssl version
	I1124 03:12:15.764455  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:15.774930  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779615  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779675  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.837760  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:12:15.848860  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:15.859402  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864242  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864304  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.923088  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:15.933908  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:15.944242  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949198  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949248  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:16.007273  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:16.018117  657716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:16.023108  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:16.086212  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:16.144287  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:16.203439  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:16.267980  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:16.329154  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 03:12:16.391972  657716 kubeadm.go:401] StartCluster: {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:16.392083  657716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:16.392153  657716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:16.431895  657716 cri.go:89] found id: "3dc1c0625a30c329549c40b7e0c43d1ab4d818ccc0fb85ea668066af23a327d3"
	I1124 03:12:16.431924  657716 cri.go:89] found id: "4e8e84f339bed2139577a34b2e6715ab1a8d9a3b425a16886245d61781115cf0"
	I1124 03:12:16.431930  657716 cri.go:89] found id: "7294ac6825ca8afcd936803374aa461bcb4d637ea8743600784037c5acb225b2"
	I1124 03:12:16.431934  657716 cri.go:89] found id: "767a9908e75937581bb6f9fd527e760a401658cde3d3cf28bc2b66613eab1027"
	I1124 03:12:16.431938  657716 cri.go:89] found id: ""
	I1124 03:12:16.431989  657716 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:16.448469  657716 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:16Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:16.448636  657716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:16.460046  657716 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:16.460066  657716 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:16.460159  657716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:16.470578  657716 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:16.472039  657716 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-603010" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.472691  657716 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-603010" cluster setting kubeconfig missing "no-preload-603010" context setting]
	I1124 03:12:16.473827  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.476388  657716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:16.491280  657716 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 03:12:16.491307  657716 kubeadm.go:602] duration metric: took 31.234841ms to restartPrimaryControlPlane
	I1124 03:12:16.491317  657716 kubeadm.go:403] duration metric: took 99.357197ms to StartCluster
	I1124 03:12:16.491333  657716 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.491393  657716 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.492731  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.492990  657716 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:16.493291  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:16.493352  657716 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:16.493441  657716 addons.go:70] Setting storage-provisioner=true in profile "no-preload-603010"
	I1124 03:12:16.493465  657716 addons.go:239] Setting addon storage-provisioner=true in "no-preload-603010"
	W1124 03:12:16.493473  657716 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:16.493503  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494027  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.494266  657716 addons.go:70] Setting dashboard=true in profile "no-preload-603010"
	I1124 03:12:16.494322  657716 addons.go:239] Setting addon dashboard=true in "no-preload-603010"
	I1124 03:12:16.494338  657716 addons.go:70] Setting default-storageclass=true in profile "no-preload-603010"
	I1124 03:12:16.494434  657716 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603010"
	W1124 03:12:16.494361  657716 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:16.494570  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494863  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.495005  657716 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:16.495647  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.496468  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:16.527269  657716 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:16.528480  657716 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:16.528517  657716 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1124 03:12:14.168310  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:16.172923  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:18.176795  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:14.828319  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:14.828372  656542 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:14.828432  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.858092  656542 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:14.858118  656542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:14.858192  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.865650  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.866433  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.895242  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.975501  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:14.992389  656542 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:12:15.008151  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:15.016186  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:15.016211  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:15.031574  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:15.042522  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:15.042540  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:15.074331  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:15.074365  656542 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:15.109090  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:15.109113  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:15.128161  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:15.128184  656542 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:15.147874  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:15.147903  656542 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:15.168191  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:15.168211  656542 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:15.185637  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:15.185661  656542 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:15.202994  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:15.203016  656542 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:15.221608  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:17.996962  656542 node_ready.go:49] node "default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:17.997067  656542 node_ready.go:38] duration metric: took 3.004589581s for node "default-k8s-diff-port-993813" to be "Ready" ...
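[Editor's note: the node_ready wait above polls the node object until its Ready condition is True. A rough hand-run equivalent, using the same in-VM kubectl the log invokes (illustrative only, not minikube's actual code path):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl wait \
      --for=condition=Ready node/default-k8s-diff-port-993813 --timeout=6m
]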
	I1124 03:12:17.997096  656542 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:17.997184  656542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:18.834613  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.826385361s)
	I1124 03:12:18.834690  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.803092411s)
	I1124 03:12:18.834853  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.613213665s)
	I1124 03:12:18.834988  656542 api_server.go:72] duration metric: took 4.047778988s to wait for apiserver process to appear ...
	I1124 03:12:18.835771  656542 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:18.835800  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:18.838614  656542 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993813 addons enable metrics-server
	
	I1124 03:12:18.844882  656542 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 03:12:17.043130  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.043165  658811 ubuntu.go:182] provisioning hostname "embed-certs-284604"
	I1124 03:12:17.043247  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.069679  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.070109  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.070142  658811 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-284604 && echo "embed-certs-284604" | sudo tee /etc/hostname
	I1124 03:12:17.259114  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.259199  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.284082  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.284399  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.284433  658811 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-284604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-284604/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-284604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:17.452374  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
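[Editor's note: the SSH script above is idempotent; grep -xq matches whole lines only, so the 127.0.1.1 entry is rewritten or appended just once. A quick check of the result (illustrative):

    # whole-line match; exits 0 once the hostname entry is in place
    grep -xq '127.0.1.1 embed-certs-284604' /etc/hosts && echo present
]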
	I1124 03:12:17.452411  658811 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:17.452438  658811 ubuntu.go:190] setting up certificates
	I1124 03:12:17.452452  658811 provision.go:84] configureAuth start
	I1124 03:12:17.452521  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:17.483434  658811 provision.go:143] copyHostCerts
	I1124 03:12:17.483502  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:17.483519  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:17.483580  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:17.483712  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:17.483725  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:17.483764  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:17.483851  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:17.483858  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:17.483909  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:17.483990  658811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-284604 san=[127.0.0.1 192.168.94.2 embed-certs-284604 localhost minikube]
	I1124 03:12:17.911206  658811 provision.go:177] copyRemoteCerts
	I1124 03:12:17.911335  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:17.911394  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.943914  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.069938  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:18.098447  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:18.124997  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
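[Editor's note: the server cert generated above (the san=[...] log line) should carry those names and IPs as subject alternative names; with OpenSSL 1.1.1 or newer this can be inspected directly on the machine (illustrative):

    # expect DNS:embed-certs-284604, DNS:localhost, DNS:minikube,
    # IP:127.0.0.1, IP:192.168.94.2 per the san=[...] list above
    sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName
]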
	I1124 03:12:18.162531  658811 provision.go:87] duration metric: took 710.055135ms to configureAuth
	I1124 03:12:18.162560  658811 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:18.162764  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:18.162877  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.187248  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:18.187553  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:18.187575  658811 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:18.557227  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:18.557257  658811 machine.go:97] duration metric: took 4.723983027s to provisionDockerMachine
	I1124 03:12:18.557270  658811 client.go:176] duration metric: took 9.315155053s to LocalClient.Create
	I1124 03:12:18.557286  658811 start.go:167] duration metric: took 9.315214435s to libmachine.API.Create "embed-certs-284604"
	I1124 03:12:18.557298  658811 start.go:293] postStartSetup for "embed-certs-284604" (driver="docker")
	I1124 03:12:18.557310  658811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:18.557379  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:18.557432  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.587404  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.715877  658811 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:18.721275  658811 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:18.721309  658811 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:18.721322  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:18.721381  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:18.721473  658811 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:18.721597  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:18.732645  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:18.763370  658811 start.go:296] duration metric: took 206.056597ms for postStartSetup
	I1124 03:12:18.763732  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.791899  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:18.792183  658811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:18.792233  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.820806  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.936530  658811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:18.948570  658811 start.go:128] duration metric: took 9.708372989s to createHost
	I1124 03:12:18.948686  658811 start.go:83] releasing machines lock for "embed-certs-284604", held for 9.708587492s
	I1124 03:12:18.948771  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.973190  658811 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:18.973375  658811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:18.973512  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.973582  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.998620  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.999698  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.845938  656542 addons.go:530] duration metric: took 4.058450553s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 03:12:18.846295  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:18.846717  656542 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:12:19.335969  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:19.342155  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1124 03:12:19.343392  656542 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:19.343421  656542 api_server.go:131] duration metric: took 507.639836ms to wait for apiserver health ...
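[Editor's note: the 500 body logged above is the apiserver's per-check healthz report; the wait simply re-probes until the endpoint returns 200 with body "ok", which is what happened here on the next attempt. A shell sketch of that polling loop (not minikube's code):

    # -f makes curl fail on the 500s, -k skips verification of the
    # apiserver's self-signed serving cert
    until curl -fksS --max-time 2 https://192.168.76.2:8444/healthz | grep -qx ok; do
      sleep 0.5
    done
]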
	I1124 03:12:19.343433  656542 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:19.347170  656542 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:19.347220  656542 system_pods.go:61] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.347233  656542 system_pods.go:61] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.347244  656542 system_pods.go:61] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.347253  656542 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.347263  656542 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.347271  656542 system_pods.go:61] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.347279  656542 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.347290  656542 system_pods.go:61] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.347300  656542 system_pods.go:74] duration metric: took 3.857291ms to wait for pod list to return data ...
	I1124 03:12:19.347309  656542 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:19.350005  656542 default_sa.go:45] found service account: "default"
	I1124 03:12:19.350027  656542 default_sa.go:55] duration metric: took 2.709767ms for default service account to be created ...
	I1124 03:12:19.350036  656542 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:19.354450  656542 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:19.354480  656542 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.354492  656542 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.354502  656542 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.354512  656542 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.354525  656542 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.354534  656542 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.354542  656542 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.354550  656542 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.354560  656542 system_pods.go:126] duration metric: took 4.516416ms to wait for k8s-apps to be running ...
	I1124 03:12:19.354569  656542 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:19.354617  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:19.377699  656542 system_svc.go:56] duration metric: took 23.119925ms WaitForService to wait for kubelet
	I1124 03:12:19.377726  656542 kubeadm.go:587] duration metric: took 4.590516557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:19.377808  656542 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:19.381785  656542 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:19.381815  656542 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:19.381831  656542 node_conditions.go:105] duration metric: took 4.017737ms to run NodePressure ...
	I1124 03:12:19.381846  656542 start.go:242] waiting for startup goroutines ...
	I1124 03:12:19.381857  656542 start.go:247] waiting for cluster config update ...
	I1124 03:12:19.381883  656542 start.go:256] writing updated cluster config ...
	I1124 03:12:19.382229  656542 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:19.387932  656542 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:19.394333  656542 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
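[Editor's note: this extra wait tracks each control-plane pod by the labels listed above until it reports Ready. Roughly the same check by hand for the coredns pod (illustrative):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
]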
	I1124 03:12:16.529636  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:16.529826  657716 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:16.529877  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.529719  657716 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.530024  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:16.530070  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.534729  657716 addons.go:239] Setting addon default-storageclass=true in "no-preload-603010"
	W1124 03:12:16.534754  657716 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:16.534783  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.539339  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.565768  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.582397  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.585042  657716 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.585070  657716 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:16.585126  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.617946  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.706410  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:16.731745  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:16.731773  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:16.736337  657716 node_ready.go:35] waiting up to 6m0s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:16.736937  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.758823  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:16.758847  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:16.768684  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.788344  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:16.788369  657716 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:16.806593  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:16.806620  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:16.847576  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:16.847609  657716 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:16.867721  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:16.867755  657716 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:16.886765  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:16.886787  657716 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:16.907569  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:16.907732  657716 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:16.929396  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:16.929417  657716 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:16.958374  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:19.957067  657716 node_ready.go:49] node "no-preload-603010" is "Ready"
	I1124 03:12:19.957111  657716 node_ready.go:38] duration metric: took 3.220732108s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:19.957131  657716 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:19.957256  657716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:20.880814  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.143842388s)
	I1124 03:12:20.881241  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.112181993s)
	I1124 03:12:21.157660  657716 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.200376454s)
	I1124 03:12:21.157703  657716 api_server.go:72] duration metric: took 4.664681444s to wait for apiserver process to appear ...
	I1124 03:12:21.157713  657716 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:21.157733  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.158403  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199980339s)
	I1124 03:12:21.160177  657716 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-603010 addons enable metrics-server
	
	I1124 03:12:21.161363  657716 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 03:12:19.120481  658811 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:19.211741  658811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:19.277394  658811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:19.284078  658811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:19.284149  658811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:19.319995  658811 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:12:19.320028  658811 start.go:496] detecting cgroup driver to use...
	I1124 03:12:19.320064  658811 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:19.320117  658811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:19.345823  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:19.367716  658811 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:19.367782  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:19.389799  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:19.412438  658811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:19.524730  658811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:19.637210  658811 docker.go:234] disabling docker service ...
	I1124 03:12:19.637286  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:19.659861  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:19.677152  658811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:19.823448  658811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:19.960707  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:19.981616  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:20.012418  658811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:20.012486  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.058077  658811 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:20.058214  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.074742  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.118587  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.135044  658811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:20.151861  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.172656  658811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.194765  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.232792  658811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:20.242855  658811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:20.253417  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:20.371692  658811 ssh_runner.go:195] Run: sudo systemctl restart crio
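[Editor's note: the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl set, which the restart just picked up. They can be confirmed with (illustrative):

    # expect: pause_image = "registry.k8s.io/pause:3.10.1"
    #         cgroup_manager = "systemd"
    #         conmon_cgroup = "pod"
    #         "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
      /etc/crio/crio.conf.d/02-crio.conf
]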
	I1124 03:12:21.221343  658811 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:21.221440  658811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:21.226905  658811 start.go:564] Will wait 60s for crictl version
	I1124 03:12:21.227016  658811 ssh_runner.go:195] Run: which crictl
	I1124 03:12:21.231693  658811 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:21.262514  658811 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:21.262603  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.302192  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.363037  658811 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:21.162777  657716 addons.go:530] duration metric: took 4.669427095s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 03:12:21.163688  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:21.163718  657716 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:20.668896  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:23.167980  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:21.364543  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:21.388019  658811 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:21.393290  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:21.406629  658811 kubeadm.go:884] updating cluster {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:21.406778  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:21.406846  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.445258  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.445284  658811 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:21.445336  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.471000  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.471025  658811 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:21.471037  658811 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:21.471125  658811 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-284604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:21.471186  658811 ssh_runner.go:195] Run: crio config
	I1124 03:12:21.516457  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:21.516480  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:21.516502  658811 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:21.516532  658811 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-284604 NodeName:embed-certs-284604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:21.516680  658811 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-284604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:21.516751  658811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:21.524967  658811 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:21.525035  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:21.533487  658811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 03:12:21.547228  658811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:21.640415  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
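[Editor's note: the kubeadm config printed earlier is staged as /var/tmp/minikube/kubeadm.yaml.new (2214 bytes, matching the scp above) and later fed to kubeadm. A rough hand-run equivalent (a sketch; the real invocation also passes preflight-error ignores):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new
]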
	I1124 03:12:21.656434  658811 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:21.660696  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:21.674410  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:21.772584  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:21.798340  658811 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604 for IP: 192.168.94.2
	I1124 03:12:21.798360  658811 certs.go:195] generating shared ca certs ...
	I1124 03:12:21.798381  658811 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.798539  658811 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:21.798593  658811 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:21.798607  658811 certs.go:257] generating profile certs ...
	I1124 03:12:21.798690  658811 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key
	I1124 03:12:21.798708  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt with IP's: []
	I1124 03:12:21.837756  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt ...
	I1124 03:12:21.837790  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt: {Name:mk6d8aec213556beda470e3e5188eed1aec5e183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838000  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key ...
	I1124 03:12:21.838030  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key: {Name:mk56f44e1d331f82a560e15fe6a3c3ca4602bba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838172  658811 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087
	I1124 03:12:21.838189  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 03:12:21.915471  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 ...
	I1124 03:12:21.915494  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087: {Name:mk185605a13bb00cdff0decbde0063003287a88f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915630  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 ...
	I1124 03:12:21.915643  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087: {Name:mk1404f69a73d575873220c9d20779709c9db66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915715  658811 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt
	I1124 03:12:21.915784  658811 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key
	I1124 03:12:21.915837  658811 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key
	I1124 03:12:21.915852  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt with IP's: []
	I1124 03:12:22.064876  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt ...
	I1124 03:12:22.064923  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt: {Name:mk7bbfb718db4eee243d6b6658f5b6db725b34b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:22.065108  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key ...
	I1124 03:12:22.065140  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key: {Name:mk282c31a6bdbd1f185d5fa986bb6679f789f94a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
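The client, apiserver, and proxy-client profile certs above are generated in-process (crypto.go), not by shelling out. Purely as an illustration, an openssl equivalent of the apiserver cert, using the SANs logged at 03:12:21.838189 and a lifetime matching the profile's CertExpiration of 26280h (1095 days); the CN and filenames here are assumed, not taken from minikube:

	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
	  -days 1095 -out apiserver.crt \
	  -extfile <(printf "subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.94.2")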
	I1124 03:12:22.065488  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:22.065564  658811 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:22.065576  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:22.065602  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:22.065630  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:22.065654  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:22.065702  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:22.066383  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:22.086471  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:22.103602  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:22.120085  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:22.137488  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:12:22.154084  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:22.171055  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:22.187877  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:22.204407  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:22.222560  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:22.241380  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:22.258066  658811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:22.269950  658811 ssh_runner.go:195] Run: openssl version
	I1124 03:12:22.276120  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:22.283870  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287375  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287414  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.321400  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:22.329479  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:22.338113  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342815  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342865  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.384524  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:22.393408  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:22.402946  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.406951  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.407009  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.445501  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
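The openssl x509 -hash / ln -fs pairs above follow OpenSSL's CA-lookup convention: a trust store directory is searched via symlinks named after the subject-name hash of each CA, with a .0 suffix to disambiguate hash collisions, which is where names like b5213941.0 come from. In shell form:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"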
	I1124 03:12:22.454521  658811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:22.458152  658811 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:12:22.458212  658811 kubeadm.go:401] StartCluster: {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:22.458278  658811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:22.458330  658811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:22.487574  658811 cri.go:89] found id: ""
	I1124 03:12:22.487653  658811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:22.495876  658811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:12:22.505058  658811 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:12:22.505121  658811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:12:22.515162  658811 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:12:22.515181  658811 kubeadm.go:158] found existing configuration files:
	
	I1124 03:12:22.515229  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:12:22.525864  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:12:22.525956  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:12:22.535632  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:12:22.545975  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:12:22.546068  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:12:22.556144  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.566062  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:12:22.566123  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.576364  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:12:22.587041  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:12:22.587089  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
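The four grep/rm pairs above are a single stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443 and is removed otherwise so kubeadm can regenerate it. On a first start, as here, none of the files exist, so every grep exits 2 and every rm is a no-op. Condensed:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"
	done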
	I1124 03:12:22.596656  658811 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:12:22.678370  658811 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:12:22.762592  658811 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1124 03:12:21.400229  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:23.400859  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:21.658606  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.664294  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:12:21.665654  657716 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:21.665685  657716 api_server.go:131] duration metric: took 507.965368ms to wait for apiserver health ...
	I1124 03:12:21.665696  657716 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:21.669523  657716 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:21.669569  657716 system_pods.go:61] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.669584  657716 system_pods.go:61] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.669600  657716 system_pods.go:61] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.669613  657716 system_pods.go:61] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.669620  657716 system_pods.go:61] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.669631  657716 system_pods.go:61] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.669640  657716 system_pods.go:61] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.669651  657716 system_pods.go:61] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.669661  657716 system_pods.go:74] duration metric: took 3.958242ms to wait for pod list to return data ...
	I1124 03:12:21.669744  657716 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:21.672641  657716 default_sa.go:45] found service account: "default"
	I1124 03:12:21.672665  657716 default_sa.go:55] duration metric: took 2.912794ms for default service account to be created ...
	I1124 03:12:21.672674  657716 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:21.676337  657716 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:21.676367  657716 system_pods.go:89] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.676379  657716 system_pods.go:89] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.676394  657716 system_pods.go:89] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.676403  657716 system_pods.go:89] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.676411  657716 system_pods.go:89] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.676422  657716 system_pods.go:89] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.676433  657716 system_pods.go:89] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.676441  657716 system_pods.go:89] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.676450  657716 system_pods.go:126] duration metric: took 3.770261ms to wait for k8s-apps to be running ...
	I1124 03:12:21.676459  657716 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:21.676504  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:21.690659  657716 system_svc.go:56] duration metric: took 14.192089ms WaitForService to wait for kubelet
	I1124 03:12:21.690686  657716 kubeadm.go:587] duration metric: took 5.197662584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:21.690707  657716 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:21.693136  657716 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:21.693164  657716 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:21.693184  657716 node_conditions.go:105] duration metric: took 2.469957ms to run NodePressure ...
	I1124 03:12:21.693203  657716 start.go:242] waiting for startup goroutines ...
	I1124 03:12:21.693215  657716 start.go:247] waiting for cluster config update ...
	I1124 03:12:21.693239  657716 start.go:256] writing updated cluster config ...
	I1124 03:12:21.693532  657716 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:21.697901  657716 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:21.701025  657716 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:12:23.706826  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.707596  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.168947  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:27.669069  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:25.402048  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.901054  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.707794  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.710379  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.675678  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:32.166267  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:34.784594  658811 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:12:34.784648  658811 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:12:34.784736  658811 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:12:34.784810  658811 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:12:34.784870  658811 kubeadm.go:319] OS: Linux
	I1124 03:12:34.784983  658811 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:12:34.785059  658811 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:12:34.785107  658811 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:12:34.785166  658811 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:12:34.785237  658811 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:12:34.785303  658811 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:12:34.785372  658811 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:12:34.785441  658811 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:12:34.785518  658811 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:12:34.785647  658811 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:12:34.785738  658811 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:12:34.785806  658811 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:12:34.786978  658811 out.go:252]   - Generating certificates and keys ...
	I1124 03:12:34.787057  658811 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:12:34.787166  658811 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:12:34.787260  658811 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:12:34.787314  658811 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:12:34.787380  658811 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:12:34.787463  658811 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:12:34.787510  658811 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:12:34.787654  658811 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787713  658811 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:12:34.787835  658811 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787929  658811 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:12:34.787996  658811 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:12:34.788075  658811 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:12:34.788161  658811 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:12:34.788246  658811 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:12:34.788307  658811 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:12:34.788377  658811 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:12:34.788464  658811 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:12:34.788510  658811 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:12:34.788574  658811 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:12:34.788677  658811 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:12:34.789842  658811 out.go:252]   - Booting up control plane ...
	I1124 03:12:34.789955  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:12:34.790029  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:12:34.790102  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:12:34.790202  658811 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:12:34.790286  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:12:34.790369  658811 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:12:34.790438  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:12:34.790470  658811 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:12:34.790573  658811 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:12:34.790662  658811 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:12:34.790715  658811 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001939634s
	I1124 03:12:34.790808  658811 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:12:34.790874  658811 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 03:12:34.790987  658811 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:12:34.791057  658811 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:12:34.791109  658811 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.83516238s
	I1124 03:12:34.791172  658811 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.120221493s
	I1124 03:12:34.791231  658811 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501624476s
	I1124 03:12:34.791319  658811 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:12:34.791443  658811 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:12:34.791516  658811 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:12:34.791778  658811 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-284604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:12:34.791865  658811 kubeadm.go:319] [bootstrap-token] Using token: 6opk0j.95uwfc60sd8szhpc
	I1124 03:12:34.793026  658811 out.go:252]   - Configuring RBAC rules ...
	I1124 03:12:34.793125  658811 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:12:34.793213  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:12:34.793344  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:12:34.793455  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:12:34.793557  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:12:34.793642  658811 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:12:34.793774  658811 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:12:34.793810  658811 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:12:34.793851  658811 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:12:34.793857  658811 kubeadm.go:319] 
	I1124 03:12:34.793964  658811 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:12:34.793973  658811 kubeadm.go:319] 
	I1124 03:12:34.794046  658811 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:12:34.794053  658811 kubeadm.go:319] 
	I1124 03:12:34.794074  658811 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:12:34.794151  658811 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:12:34.794229  658811 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:12:34.794239  658811 kubeadm.go:319] 
	I1124 03:12:34.794318  658811 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:12:34.794327  658811 kubeadm.go:319] 
	I1124 03:12:34.794375  658811 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:12:34.794381  658811 kubeadm.go:319] 
	I1124 03:12:34.794424  658811 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:12:34.794490  658811 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:12:34.794554  658811 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:12:34.794560  658811 kubeadm.go:319] 
	I1124 03:12:34.794633  658811 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:12:34.794705  658811 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:12:34.794712  658811 kubeadm.go:319] 
	I1124 03:12:34.794781  658811 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.794955  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:12:34.794990  658811 kubeadm.go:319] 	--control-plane 
	I1124 03:12:34.794996  658811 kubeadm.go:319] 
	I1124 03:12:34.795133  658811 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:12:34.795142  658811 kubeadm.go:319] 
	I1124 03:12:34.795208  658811 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.795304  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
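The --discovery-token-ca-cert-hash above is the SHA-256 of the cluster CA's DER-encoded public key. It can be recomputed from the CA cert (which kubeadm was pointed at under /var/lib/minikube/certs) with the standard recipe for an RSA CA:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'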
	I1124 03:12:34.795316  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:34.795322  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:34.796503  658811 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1124 03:12:29.901574  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.399665  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.206353  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.206828  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.667383  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:35.167626  650744 pod_ready.go:94] pod "coredns-5dd5756b68-5nwx9" is "Ready"
	I1124 03:12:35.167652  650744 pod_ready.go:86] duration metric: took 36.006547637s for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.170471  650744 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.174915  650744 pod_ready.go:94] pod "etcd-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.174952  650744 pod_ready.go:86] duration metric: took 4.460425ms for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.178276  650744 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.181797  650744 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.181815  650744 pod_ready.go:86] duration metric: took 3.521385ms for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.184086  650744 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.364640  650744 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.364666  650744 pod_ready.go:86] duration metric: took 180.561055ms for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.566321  650744 pod_ready.go:83] waiting for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.965760  650744 pod_ready.go:94] pod "kube-proxy-r82jh" is "Ready"
	I1124 03:12:35.965786  650744 pod_ready.go:86] duration metric: took 399.441601ms for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.166112  650744 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564858  650744 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-579951" is "Ready"
	I1124 03:12:36.564911  650744 pod_ready.go:86] duration metric: took 398.774389ms for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564927  650744 pod_ready.go:40] duration metric: took 37.40842222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:36.606666  650744 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 03:12:36.609650  650744 out.go:203] 
	W1124 03:12:36.610839  650744 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:12:36.611943  650744 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:12:36.613009  650744 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-579951" cluster and "default" namespace by default
	I1124 03:12:34.797545  658811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:12:34.801904  658811 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:12:34.801919  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:12:34.815659  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:12:35.008985  658811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:12:35.009118  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-284604 minikube.k8s.io/updated_at=2025_11_24T03_12_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=embed-certs-284604 minikube.k8s.io/primary=true
	I1124 03:12:35.009137  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.019423  658811 ops.go:34] apiserver oom_adj: -16
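The oom_adj check above reads the legacy /proc/<pid>/oom_adj file for the apiserver; -16 on that scale corresponds to the oom_score_adj of about -998 that the kubelet assigns to critical static pods, meaning the OOM killer will pick almost any other process first. The equivalent probe:

	cat /proc/$(pgrep kube-apiserver)/oom_adj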
	I1124 03:12:35.098937  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.600025  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.099882  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.599914  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.099714  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.599861  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.098989  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.599248  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.099379  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.599598  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.664570  658811 kubeadm.go:1114] duration metric: took 4.655535544s to wait for elevateKubeSystemPrivileges
	I1124 03:12:39.664621  658811 kubeadm.go:403] duration metric: took 17.206413974s to StartCluster
	I1124 03:12:39.664642  658811 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.664720  658811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:39.666858  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.667137  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:12:39.667148  658811 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:39.667230  658811 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:39.667331  658811 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-284604"
	I1124 03:12:39.667356  658811 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-284604"
	I1124 03:12:39.667360  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:39.667396  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.667427  658811 addons.go:70] Setting default-storageclass=true in profile "embed-certs-284604"
	I1124 03:12:39.667451  658811 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-284604"
	I1124 03:12:39.667810  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.667990  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.668614  658811 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:39.670239  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:39.693324  658811 addons.go:239] Setting addon default-storageclass=true in "embed-certs-284604"
	I1124 03:12:39.693377  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.693617  658811 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 03:12:34.900232  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:36.901987  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:39.399311  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:39.693843  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.695301  658811 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.695324  658811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:39.695401  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.723273  658811 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.723298  658811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:39.723378  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.730678  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.746663  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.790082  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
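The sed pipeline above patches the CoreDNS Corefile before replacing the ConfigMap: a hosts block mapping host.minikube.internal to the host gateway is inserted ahead of the forward plugin (so it answers first, with fallthrough for everything else), and log is inserted before errors. The patched fragment of the Corefile comes out roughly as:

	.:53 {
	    log
	    errors
	    hosts {
	       192.168.94.1 host.minikube.internal
	       fallthrough
	    }
	    forward . /etc/resolv.conf
	    ...
	}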
	I1124 03:12:39.807223  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:39.854663  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.859938  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.988561  658811 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 03:12:39.990213  658811 node_ready.go:35] waiting up to 6m0s for node "embed-certs-284604" to be "Ready" ...
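node_ready.go polls the node object until its Ready condition reports True; the same wait, expressed with stock kubectl against this cluster:

	kubectl wait --for=condition=Ready node/embed-certs-284604 --timeout=6m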
	I1124 03:12:40.170444  658811 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1124 03:12:36.707151  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:39.206261  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:41.206507  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	I1124 03:12:40.171595  658811 addons.go:530] duration metric: took 504.363947ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:12:40.492653  658811 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-284604" context rescaled to 1 replicas
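The rescale logged at 03:12:40.492653 trims the coredns Deployment (kubeadm deploys two replicas by default) down to one for this single-node cluster; the kubectl equivalent:

	kubectl -n kube-system scale deployment coredns --replicas=1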
	W1124 03:12:41.992667  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:43.993353  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:41.399566  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.899302  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.705614  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.706618  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.993493  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:47.993708  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:46.399440  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.399607  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.205812  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:50.206724  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:50.493353  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	I1124 03:12:50.993323  658811 node_ready.go:49] node "embed-certs-284604" is "Ready"
	I1124 03:12:50.993350  658811 node_ready.go:38] duration metric: took 11.003110454s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:12:50.993367  658811 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:50.993411  658811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:51.005273  658811 api_server.go:72] duration metric: took 11.338089025s to wait for apiserver process to appear ...
	I1124 03:12:51.005299  658811 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:51.005319  658811 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:12:51.010460  658811 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:12:51.011346  658811 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:51.011367  658811 api_server.go:131] duration metric: took 6.06186ms to wait for apiserver health ...
	I1124 03:12:51.011376  658811 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:51.014056  658811 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:51.014084  658811 system_pods.go:61] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.014092  658811 system_pods.go:61] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.014101  658811 system_pods.go:61] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.014106  658811 system_pods.go:61] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.014113  658811 system_pods.go:61] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.014119  658811 system_pods.go:61] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.014136  658811 system_pods.go:61] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.014147  658811 system_pods.go:61] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.014155  658811 system_pods.go:74] duration metric: took 2.773001ms to wait for pod list to return data ...
	I1124 03:12:51.014164  658811 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:51.016349  658811 default_sa.go:45] found service account: "default"
	I1124 03:12:51.016366  658811 default_sa.go:55] duration metric: took 2.196577ms for default service account to be created ...
	I1124 03:12:51.016373  658811 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:51.018741  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.018763  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.018768  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.018774  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.018778  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.018783  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.018787  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.018791  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.018798  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.018817  658811 retry.go:31] will retry after 267.963041ms: missing components: kube-dns
	I1124 03:12:51.291183  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.291223  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.291231  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.291239  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.291244  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.291250  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.291255  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.291260  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.291268  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.291295  658811 retry.go:31] will retry after 316.287047ms: missing components: kube-dns
	I1124 03:12:51.610985  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.611019  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.611026  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.611037  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.611045  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.611055  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.611061  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.611066  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.611074  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.611098  658811 retry.go:31] will retry after 440.03042ms: missing components: kube-dns
	I1124 03:12:52.054793  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:52.054821  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:52.054826  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:52.054831  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:52.054835  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:52.054839  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:52.054842  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:52.054845  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:52.054850  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:52.054863  658811 retry.go:31] will retry after 498.386661ms: missing components: kube-dns
	I1124 03:12:52.557040  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:52.557071  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Running
	I1124 03:12:52.557079  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:52.557084  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:52.557089  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:52.557095  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:52.557100  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:52.557104  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:52.557110  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Running
	I1124 03:12:52.557120  658811 system_pods.go:126] duration metric: took 1.540739928s to wait for k8s-apps to be running ...
	I1124 03:12:52.557134  658811 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:52.557188  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:52.570482  658811 system_svc.go:56] duration metric: took 13.341226ms WaitForService to wait for kubelet
	I1124 03:12:52.570511  658811 kubeadm.go:587] duration metric: took 12.903331916s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:52.570535  658811 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:52.573089  658811 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:52.573117  658811 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:52.573148  658811 node_conditions.go:105] duration metric: took 2.605161ms to run NodePressure ...
	I1124 03:12:52.573166  658811 start.go:242] waiting for startup goroutines ...
	I1124 03:12:52.573175  658811 start.go:247] waiting for cluster config update ...
	I1124 03:12:52.573187  658811 start.go:256] writing updated cluster config ...
	I1124 03:12:52.573408  658811 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:52.576899  658811 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:52.580189  658811 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.584242  658811 pod_ready.go:94] pod "coredns-66bc5c9577-89mzc" is "Ready"
	I1124 03:12:52.584262  658811 pod_ready.go:86] duration metric: took 4.045428ms for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.586066  658811 pod_ready.go:83] waiting for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.590045  658811 pod_ready.go:94] pod "etcd-embed-certs-284604" is "Ready"
	I1124 03:12:52.590064  658811 pod_ready.go:86] duration metric: took 3.981268ms for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.592126  658811 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.595532  658811 pod_ready.go:94] pod "kube-apiserver-embed-certs-284604" is "Ready"
	I1124 03:12:52.595555  658811 pod_ready.go:86] duration metric: took 3.408619ms for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.597386  658811 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.980512  658811 pod_ready.go:94] pod "kube-controller-manager-embed-certs-284604" is "Ready"
	I1124 03:12:52.980538  658811 pod_ready.go:86] duration metric: took 383.129867ms for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.181479  658811 pod_ready.go:83] waiting for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.581552  658811 pod_ready.go:94] pod "kube-proxy-bn8fd" is "Ready"
	I1124 03:12:53.581575  658811 pod_ready.go:86] duration metric: took 400.07394ms for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.781409  658811 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.181669  658811 pod_ready.go:94] pod "kube-scheduler-embed-certs-284604" is "Ready"
	I1124 03:12:54.181696  658811 pod_ready.go:86] duration metric: took 400.263506ms for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.181712  658811 pod_ready.go:40] duration metric: took 1.604781402s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:54.228480  658811 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:54.231260  658811 out.go:179] * Done! kubectl is now configured to use "embed-certs-284604" cluster and "default" namespace by default
	W1124 03:12:50.399926  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:52.400576  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:52.900171  656542 pod_ready.go:94] pod "coredns-66bc5c9577-w62hm" is "Ready"
	I1124 03:12:52.900193  656542 pod_ready.go:86] duration metric: took 33.505834176s for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.903110  656542 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.907513  656542 pod_ready.go:94] pod "etcd-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:52.907539  656542 pod_ready.go:86] duration metric: took 4.401311ms for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.909400  656542 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.913156  656542 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:52.913178  656542 pod_ready.go:86] duration metric: took 3.755745ms for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.914951  656542 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.098380  656542 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:53.098409  656542 pod_ready.go:86] duration metric: took 183.435612ms for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.298588  656542 pod_ready.go:83] waiting for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.698811  656542 pod_ready.go:94] pod "kube-proxy-xgjzs" is "Ready"
	I1124 03:12:53.698835  656542 pod_ready.go:86] duration metric: took 400.225655ms for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.898023  656542 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.299083  656542 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:54.299107  656542 pod_ready.go:86] duration metric: took 401.0576ms for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.299119  656542 pod_ready.go:40] duration metric: took 34.911155437s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:54.345901  656542 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:54.347541  656542 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-993813" cluster and "default" namespace by default
	W1124 03:12:52.208247  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:54.707505  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	I1124 03:12:56.206822  657716 pod_ready.go:94] pod "coredns-66bc5c9577-9n5xf" is "Ready"
	I1124 03:12:56.206857  657716 pod_ready.go:86] duration metric: took 34.50580389s for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.209449  657716 pod_ready.go:83] waiting for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.213288  657716 pod_ready.go:94] pod "etcd-no-preload-603010" is "Ready"
	I1124 03:12:56.213310  657716 pod_ready.go:86] duration metric: took 3.839555ms for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.215450  657716 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.219181  657716 pod_ready.go:94] pod "kube-apiserver-no-preload-603010" is "Ready"
	I1124 03:12:56.219201  657716 pod_ready.go:86] duration metric: took 3.726981ms for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.221198  657716 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.404873  657716 pod_ready.go:94] pod "kube-controller-manager-no-preload-603010" is "Ready"
	I1124 03:12:56.404930  657716 pod_ready.go:86] duration metric: took 183.709106ms for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.605567  657716 pod_ready.go:83] waiting for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.005571  657716 pod_ready.go:94] pod "kube-proxy-swj6c" is "Ready"
	I1124 03:12:57.005598  657716 pod_ready.go:86] duration metric: took 400.0046ms for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.205842  657716 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.605312  657716 pod_ready.go:94] pod "kube-scheduler-no-preload-603010" is "Ready"
	I1124 03:12:57.605336  657716 pod_ready.go:86] duration metric: took 399.465818ms for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.605349  657716 pod_ready.go:40] duration metric: took 35.907419342s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:57.646839  657716 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:57.648681  657716 out.go:179] * Done! kubectl is now configured to use "no-preload-603010" cluster and "default" namespace by default
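The three startup logs above all end with the same readiness loop: system_pods.go lists the kube-system pods and retry.go sleeps a growing, jittered interval ("will retry after 267.963041ms: missing components: kube-dns") until coredns reports Running. A minimal Go sketch of that pattern, assuming a generic exponential backoff with jitter (retryExpo and its parameters are illustrative, not minikube's actual retry package):

// Illustrative sketch: poll a condition with exponential backoff plus
// jitter, the pattern behind the "will retry after ...: missing
// components: kube-dns" lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo calls check until it succeeds or the deadline passes, sleeping
// an exponentially growing, jittered interval between attempts.
func retryExpo(check func() error, base, deadline time.Duration) error {
	start := time.Now()
	wait := base
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out: %w", err)
		}
		// Jitter the sleep so concurrent waiters do not poll in lockstep.
		sleep := wait + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		wait *= 2
	}
}

func main() {
	attempts := 0
	err := retryExpo(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	}, 250*time.Millisecond, 30*time.Second)
	fmt.Println("done:", err)
}

Run standalone, this prints a few retry lines in the same shape as the log before the condition clears, which is exactly the cadence visible in the 03:12:51-03:12:52 window above.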
	
	
	==> CRI-O <==
	Nov 24 03:12:39 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:39.937644213Z" level=info msg="Started container" PID=1780 containerID=2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc/dashboard-metrics-scraper id=58813a65-06ca-4c3d-ada5-22ffc0e9f19c name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c11264046058ec32796ed66d5a5f539aa2c70db3f84a08174acffea0d9ae4ae
	Nov 24 03:12:40 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:40.01197076Z" level=info msg="Removing container: f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050" id=27c0c254-13a3-40b8-bbe8-7bb9ced82646 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:12:40 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:40.023547124Z" level=info msg="Removed container f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc/dashboard-metrics-scraper" id=27c0c254-13a3-40b8-bbe8-7bb9ced82646 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.037246376Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=628bc465-ea33-494f-a52a-4e846d0d73fd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.03817902Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a7bfc460-020a-40a8-b37a-741687db26c4 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.039227143Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7a11f683-9dc8-49a1-a4ff-389cf3b430b3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.039360672Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.043620789Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.04384565Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7201f8caf35f2c261be5886fec7bf6746c4d8a96af3105a8274cfe986814166f/merged/etc/passwd: no such file or directory"
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.043882019Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7201f8caf35f2c261be5886fec7bf6746c4d8a96af3105a8274cfe986814166f/merged/etc/group: no such file or directory"
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.044506404Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.075554406Z" level=info msg="Created container 573f6a7cb37360f3f1fa23cf31377d96a60d7bd6c0e83b06a385f1fcb540e103: kube-system/storage-provisioner/storage-provisioner" id=7a11f683-9dc8-49a1-a4ff-389cf3b430b3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.076026572Z" level=info msg="Starting container: 573f6a7cb37360f3f1fa23cf31377d96a60d7bd6c0e83b06a385f1fcb540e103" id=51d1b019-fc82-4ec2-8c89-6c668aeb933f name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:12:50 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:12:50.077871675Z" level=info msg="Started container" PID=1794 containerID=573f6a7cb37360f3f1fa23cf31377d96a60d7bd6c0e83b06a385f1fcb540e103 description=kube-system/storage-provisioner/storage-provisioner id=51d1b019-fc82-4ec2-8c89-6c668aeb933f name=/runtime.v1.RuntimeService/StartContainer sandboxID=686fe9ea8a0761a38c8280fefebba5eaf19b0ef59f2c9e330f025c70af33cab3
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.885038451Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0bec34c6-976f-4f98-883d-769ded261286 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.885935881Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c55682c5-c5a3-4ffc-8793-6c5c47fa3042 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.886909446Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc/dashboard-metrics-scraper" id=5eea27b3-b132-4fb1-bee0-c8818ae41919 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.887064638Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.892203692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.892657957Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.925623908Z" level=info msg="Created container ca56ee1046dfd67567192ecb6131590bdde60902b351bebce38f90604b67c2ba: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc/dashboard-metrics-scraper" id=5eea27b3-b132-4fb1-bee0-c8818ae41919 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.926135394Z" level=info msg="Starting container: ca56ee1046dfd67567192ecb6131590bdde60902b351bebce38f90604b67c2ba" id=76cb4486-a5da-46d1-af56-7aa40bccbfc4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:13:00 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:00.928180312Z" level=info msg="Started container" PID=1829 containerID=ca56ee1046dfd67567192ecb6131590bdde60902b351bebce38f90604b67c2ba description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc/dashboard-metrics-scraper id=76cb4486-a5da-46d1-af56-7aa40bccbfc4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=6c11264046058ec32796ed66d5a5f539aa2c70db3f84a08174acffea0d9ae4ae
	Nov 24 03:13:01 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:01.068960759Z" level=info msg="Removing container: 2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff" id=45e2ef1f-0851-4e43-b26a-1d66b2ab2f43 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:13:01 default-k8s-diff-port-993813 crio[571]: time="2025-11-24T03:13:01.077826021Z" level=info msg="Removed container 2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc/dashboard-metrics-scraper" id=45e2ef1f-0851-4e43-b26a-1d66b2ab2f43 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	ca56ee1046dfd       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           10 seconds ago      Exited              dashboard-metrics-scraper   3                   6c11264046058       dashboard-metrics-scraper-6ffb444bf9-z8ltc             kubernetes-dashboard
	573f6a7cb3736       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   686fe9ea8a076       storage-provisioner                                    kube-system
	93cf9607a612f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   2a3fe2164e017       kubernetes-dashboard-855c9754f9-6tmlg                  kubernetes-dashboard
	578ada64e7018       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   2def9d1d1de0a       busybox                                                default
	4215d37d945b0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   62500196156fb       coredns-66bc5c9577-w62hm                               kube-system
	1bd7fbd7ac730       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   686fe9ea8a076       storage-provisioner                                    kube-system
	e9aedcc7b2f45       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   0270d3fa8beb2       kindnet-w6sh6                                          kube-system
	98b77ba6e3b6b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   f52f3cc9ad4ab       kube-proxy-xgjzs                                       kube-system
	9d08a55f25f2d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   aaebdbc47c617       kube-apiserver-default-k8s-diff-port-993813            kube-system
	a7d5f73dd018d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   9579ed6acdd5e       kube-scheduler-default-k8s-diff-port-993813            kube-system
	dd990c6cdcef7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   1e9189d8cc74c       kube-controller-manager-default-k8s-diff-port-993813   kube-system
	11357ba44da74       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   9f5bd76a8d024       etcd-default-k8s-diff-port-993813                      kube-system
	
	
	==> coredns [4215d37d945b02ffa680f6a88a284357077e2085850453212142af5a50e8e540] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38348 - 45319 "HINFO IN 5865592854072147901.8469372331643766163. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.484958805s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
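The i/o timeouts above are CoreDNS's kubernetes plugin failing to reach the apiserver Service IP while the node is still coming back up; once kube-proxy reprograms the Service network the list calls succeed and the pod flips to Ready (the coredns-66bc5c9577-w62hm transition at 03:12:52 earlier in the log). A minimal reachability probe mirroring what the plugin is attempting, assuming the 10.96.0.1:443 service address taken from the log:

// Minimal in-cluster reachability check: a TCP dial to the API service IP,
// the same connection CoreDNS is timing out on above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
	if err != nil {
		// Matches the "dial tcp 10.96.0.1:443: i/o timeout" lines above.
		fmt.Println("apiserver service unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver service reachable")
}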
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-993813
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-993813
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=default-k8s-diff-port-993813
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_11_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:11:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-993813
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:12:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:12:58 +0000   Mon, 24 Nov 2025 03:11:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:12:58 +0000   Mon, 24 Nov 2025 03:11:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:12:58 +0000   Mon, 24 Nov 2025 03:11:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:12:58 +0000   Mon, 24 Nov 2025 03:11:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-993813
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                704691fb-a437-4d94-adeb-2d360c12ce3d
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-w62hm                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-default-k8s-diff-port-993813                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-w6sh6                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-default-k8s-diff-port-993813             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-993813    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-xgjzs                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-default-k8s-diff-port-993813             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-z8ltc              0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-6tmlg                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  119s (x8 over 2m)  kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 2m)  kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x8 over 2m)  kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           110s               node-controller  Node default-k8s-diff-port-993813 event: Registered Node default-k8s-diff-port-993813 in Controller
	  Normal  NodeReady                97s                kubelet          Node default-k8s-diff-port-993813 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node default-k8s-diff-port-993813 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                node-controller  Node default-k8s-diff-port-993813 event: Registered Node default-k8s-diff-port-993813 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e] <==
	{"level":"warn","ts":"2025-11-24T03:12:17.098252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.110415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.117608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.126642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.136871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.148433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.160843Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.170419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.180576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.188358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.212288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.218342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.236552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.245316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.260838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:17.325775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:20.503315Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.167522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T03:12:20.503556Z","caller":"traceutil/trace.go:172","msg":"trace[666866352] range","detail":"{range_begin:/registry/minions; range_end:; response_count:0; response_revision:516; }","duration":"115.42581ms","start":"2025-11-24T03:12:20.388114Z","end":"2025-11-24T03:12:20.503540Z","steps":["trace[666866352] 'agreement among raft nodes before linearized reading'  (duration: 53.243914ms)","trace[666866352] 'range keys from in-memory index tree'  (duration: 61.891896ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T03:12:20.503462Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.164758ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-w62hm\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-24T03:12:20.503674Z","caller":"traceutil/trace.go:172","msg":"trace[1529185405] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-w62hm; range_end:; response_count:1; response_revision:517; }","duration":"107.369154ms","start":"2025-11-24T03:12:20.396285Z","end":"2025-11-24T03:12:20.503654Z","steps":["trace[1529185405] 'agreement among raft nodes before linearized reading'  (duration: 107.079744ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T03:12:20.503446Z","caller":"traceutil/trace.go:172","msg":"trace[1306895237] transaction","detail":"{read_only:false; response_revision:517; number_of_response:1; }","duration":"115.98788ms","start":"2025-11-24T03:12:20.387430Z","end":"2025-11-24T03:12:20.503418Z","steps":["trace[1306895237] 'process raft request'  (duration: 53.983981ms)","trace[1306895237] 'compare'  (duration: 61.856366ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:20.810566Z","caller":"traceutil/trace.go:172","msg":"trace[2065875771] linearizableReadLoop","detail":"{readStateIndex:553; appliedIndex:553; }","duration":"107.206381ms","start":"2025-11-24T03:12:20.703333Z","end":"2025-11-24T03:12:20.810539Z","steps":["trace[2065875771] 'read index received'  (duration: 107.199251ms)","trace[2065875771] 'applied index is now lower than readState.Index'  (duration: 6.474µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T03:12:20.873454Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"170.089505ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/disruption-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-11-24T03:12:20.873545Z","caller":"traceutil/trace.go:172","msg":"trace[53039755] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/disruption-controller; range_end:; response_count:1; response_revision:525; }","duration":"170.199895ms","start":"2025-11-24T03:12:20.703330Z","end":"2025-11-24T03:12:20.873530Z","steps":["trace[53039755] 'agreement among raft nodes before linearized reading'  (duration: 107.290629ms)","trace[53039755] 'range keys from in-memory index tree'  (duration: 62.698852ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:20.873662Z","caller":"traceutil/trace.go:172","msg":"trace[298306434] transaction","detail":"{read_only:false; response_revision:526; number_of_response:1; }","duration":"172.208081ms","start":"2025-11-24T03:12:20.701436Z","end":"2025-11-24T03:12:20.873644Z","steps":["trace[298306434] 'process raft request'  (duration: 109.179988ms)","trace[298306434] 'compare'  (duration: 62.857532ms)"],"step_count":2}
	
	
	==> kernel <==
	 03:13:11 up  1:55,  0 user,  load average: 4.55, 4.15, 2.73
	Linux default-k8s-diff-port-993813 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e9aedcc7b2f459c0aa678060a0430af50f95c9ae8cc09573789ea82fcb7fafac] <==
	I1124 03:12:19.517827       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:12:19.518099       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 03:12:19.518258       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:12:19.518277       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:12:19.518321       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:12:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:12:19.723290       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:12:19.723319       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:12:19.723331       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:12:19.723739       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:12:20.023559       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:12:20.023594       1 metrics.go:72] Registering metrics
	I1124 03:12:20.023657       1 controller.go:711] "Syncing nftables rules"
	I1124 03:12:29.724849       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:12:29.726069       1 main.go:301] handling current node
	I1124 03:12:39.730016       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:12:39.730057       1 main.go:301] handling current node
	I1124 03:12:49.723125       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:12:49.723163       1 main.go:301] handling current node
	I1124 03:12:59.723060       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:12:59.723096       1 main.go:301] handling current node
	I1124 03:13:09.723238       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:13:09.723275       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6] <==
	I1124 03:12:18.047818       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 03:12:18.047844       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:12:18.047878       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:12:18.048106       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 03:12:18.048163       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 03:12:18.052552       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 03:12:18.052963       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 03:12:18.053115       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:12:18.060329       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 03:12:18.062946       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:12:18.067768       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 03:12:18.070063       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 03:12:18.070156       1 policy_source.go:240] refreshing policies
	I1124 03:12:18.098962       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:12:18.531798       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:12:18.579924       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:12:18.607630       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:12:18.619260       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:12:18.628405       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:12:18.677331       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.79.15"}
	I1124 03:12:18.695874       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.216.245"}
	I1124 03:12:18.942726       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:12:21.549322       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:12:21.700529       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:12:21.897849       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7] <==
	I1124 03:12:21.386946       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 03:12:21.387012       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:12:21.390186       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:12:21.392565       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 03:12:21.392648       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:12:21.394846       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:12:21.396497       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:12:21.397581       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 03:12:21.400285       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:12:21.402536       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 03:12:21.403734       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:12:21.407094       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 03:12:21.407255       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 03:12:21.407367       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 03:12:21.407435       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 03:12:21.407474       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 03:12:21.409409       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:12:21.410771       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:12:21.415999       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:12:21.419248       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:12:21.419336       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:12:21.419357       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:12:21.419369       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:12:21.442788       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:12:21.446852       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	
	
	==> kube-proxy [98b77ba6e3b6b9a9bb0fd551092cc96efbc1de2ae458e7b1cda2d0aa23b17186] <==
	I1124 03:12:19.314263       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:12:19.380232       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:12:19.480866       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:12:19.480943       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 03:12:19.481014       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:12:19.501376       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:12:19.501517       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:12:19.507214       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:12:19.507765       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:12:19.507841       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:12:19.509500       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:12:19.509540       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:12:19.509694       1 config.go:309] "Starting node config controller"
	I1124 03:12:19.509722       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:12:19.510405       1 config.go:200] "Starting service config controller"
	I1124 03:12:19.510416       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:12:19.510508       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:12:19.510518       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:12:19.610060       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:12:19.611161       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:12:19.611372       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:12:19.611465       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329] <==
	I1124 03:12:15.485204       1 serving.go:386] Generated self-signed cert in-memory
	W1124 03:12:17.973201       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:12:17.973248       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 03:12:17.973260       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:12:17.973270       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:12:18.025611       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:12:18.025645       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:12:18.028552       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:12:18.028636       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:12:18.032698       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:12:18.032787       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:12:18.129180       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:12:22 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:22.491651     723 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 03:12:24 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:24.957143     723 scope.go:117] "RemoveContainer" containerID="e1d1fde154b8d5e5df9cfa39e9674178a4b900188ee3ff7569088cb072f84098"
	Nov 24 03:12:25 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:25.962856     723 scope.go:117] "RemoveContainer" containerID="e1d1fde154b8d5e5df9cfa39e9674178a4b900188ee3ff7569088cb072f84098"
	Nov 24 03:12:25 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:25.963375     723 scope.go:117] "RemoveContainer" containerID="f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050"
	Nov 24 03:12:25 default-k8s-diff-port-993813 kubelet[723]: E1124 03:12:25.964017     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z8ltc_kubernetes-dashboard(7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc" podUID="7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6"
	Nov 24 03:12:26 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:26.968782     723 scope.go:117] "RemoveContainer" containerID="f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050"
	Nov 24 03:12:26 default-k8s-diff-port-993813 kubelet[723]: E1124 03:12:26.969002     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z8ltc_kubernetes-dashboard(7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc" podUID="7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6"
	Nov 24 03:12:27 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:27.972507     723 scope.go:117] "RemoveContainer" containerID="f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050"
	Nov 24 03:12:27 default-k8s-diff-port-993813 kubelet[723]: E1124 03:12:27.972706     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z8ltc_kubernetes-dashboard(7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc" podUID="7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6"
	Nov 24 03:12:31 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:31.118425     723 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-6tmlg" podStartSLOduration=2.129583617 podStartE2EDuration="9.11840663s" podCreationTimestamp="2025-11-24 03:12:22 +0000 UTC" firstStartedPulling="2025-11-24 03:12:22.3577483 +0000 UTC m=+8.641628160" lastFinishedPulling="2025-11-24 03:12:29.34657131 +0000 UTC m=+15.630451173" observedRunningTime="2025-11-24 03:12:29.996959599 +0000 UTC m=+16.280839471" watchObservedRunningTime="2025-11-24 03:12:31.11840663 +0000 UTC m=+17.402286500"
	Nov 24 03:12:39 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:39.884603     723 scope.go:117] "RemoveContainer" containerID="f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050"
	Nov 24 03:12:40 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:40.009770     723 scope.go:117] "RemoveContainer" containerID="f8ef1f8a04fdcf7d5bdcdf829f53f2685fc217c3b21d1d97d329b346413f8050"
	Nov 24 03:12:40 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:40.010035     723 scope.go:117] "RemoveContainer" containerID="2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff"
	Nov 24 03:12:40 default-k8s-diff-port-993813 kubelet[723]: E1124 03:12:40.010267     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z8ltc_kubernetes-dashboard(7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc" podUID="7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6"
	Nov 24 03:12:47 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:47.161181     723 scope.go:117] "RemoveContainer" containerID="2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff"
	Nov 24 03:12:47 default-k8s-diff-port-993813 kubelet[723]: E1124 03:12:47.161345     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z8ltc_kubernetes-dashboard(7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc" podUID="7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6"
	Nov 24 03:12:50 default-k8s-diff-port-993813 kubelet[723]: I1124 03:12:50.036817     723 scope.go:117] "RemoveContainer" containerID="1bd7fbd7ac7308bdb9bfcef37d44d50f647796051adcb416cecd8027eff0b98e"
	Nov 24 03:13:00 default-k8s-diff-port-993813 kubelet[723]: I1124 03:13:00.884491     723 scope.go:117] "RemoveContainer" containerID="2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff"
	Nov 24 03:13:01 default-k8s-diff-port-993813 kubelet[723]: I1124 03:13:01.067565     723 scope.go:117] "RemoveContainer" containerID="2a53443a2e458d26b2cc71c84ea3aa455507f32127c9396c9cc57b3c3f221fff"
	Nov 24 03:13:01 default-k8s-diff-port-993813 kubelet[723]: I1124 03:13:01.067808     723 scope.go:117] "RemoveContainer" containerID="ca56ee1046dfd67567192ecb6131590bdde60902b351bebce38f90604b67c2ba"
	Nov 24 03:13:01 default-k8s-diff-port-993813 kubelet[723]: E1124 03:13:01.068137     723 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-z8ltc_kubernetes-dashboard(7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-z8ltc" podUID="7f9d2ca0-e48e-4a9a-813a-f7b8bb36c8b6"
	Nov 24 03:13:06 default-k8s-diff-port-993813 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 03:13:06 default-k8s-diff-port-993813 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 03:13:06 default-k8s-diff-port-993813 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 03:13:06 default-k8s-diff-port-993813 systemd[1]: kubelet.service: Consumed 1.621s CPU time.
	
	
	==> kubernetes-dashboard [93cf9607a612fd45cf69895841118ca18e88cd31bd1ae578c8b2d22db2c14cad] <==
	2025/11/24 03:12:29 Starting overwatch
	2025/11/24 03:12:29 Using namespace: kubernetes-dashboard
	2025/11/24 03:12:29 Using in-cluster config to connect to apiserver
	2025/11/24 03:12:29 Using secret token for csrf signing
	2025/11/24 03:12:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 03:12:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 03:12:29 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 03:12:29 Generating JWE encryption key
	2025/11/24 03:12:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 03:12:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 03:12:29 Initializing JWE encryption key from synchronized object
	2025/11/24 03:12:29 Creating in-cluster Sidecar client
	2025/11/24 03:12:29 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 03:12:29 Serving insecurely on HTTP port: 9090
	2025/11/24 03:12:59 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1bd7fbd7ac7308bdb9bfcef37d44d50f647796051adcb416cecd8027eff0b98e] <==
	I1124 03:12:19.278026       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 03:12:49.282494       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [573f6a7cb37360f3f1fa23cf31377d96a60d7bd6c0e83b06a385f1fcb540e103] <==
	I1124 03:12:50.089868       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:12:50.097398       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:12:50.097458       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:12:50.099333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:53.554672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:57.815397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:01.414061       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:04.468069       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:07.489986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:07.495821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:13:07.495992       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:13:07.496147       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-993813_f542bdb1-ded6-45e0-9622-2372d8336bb7!
	I1124 03:13:07.496153       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3f822d05-a76f-4ae4-9301-4b0cf90b6f0e", APIVersion:"v1", ResourceVersion:"670", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-993813_f542bdb1-ded6-45e0-9622-2372d8336bb7 became leader
	W1124 03:13:07.498653       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:07.502141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:13:07.596377       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-993813_f542bdb1-ded6-45e0-9622-2372d8336bb7!
	W1124 03:13:09.505341       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:09.509108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:11.512560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:11.517796       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
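A note on the kube-proxy section above: it warns that nodePortAddresses is unset and itself suggests `--nodeport-addresses primary`. A minimal remediation sketch, assuming the standard kubeadm-style kube-proxy ConfigMap in kube-system (context name taken from this report; this was not part of the recorded run):

	# Locate the field that triggered the warning, then set it to ["primary"].
	kubectl --context default-k8s-diff-port-993813 -n kube-system \
	  get configmap kube-proxy -o yaml | grep -n nodePortAddresses
	kubectl --context default-k8s-diff-port-993813 -n kube-system edit configmap kube-proxy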
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813: exit status 2 (344.669533ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
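Exit status 2 alongside APIServer=Running is consistent with the kubelet having been stopped (see the systemd "Stopped kubelet.service" lines in the kubelet log above). A sketch that surfaces the other component fields through the same Go-template mechanism as the command above; the field names Host, Kubelet, and Kubeconfig are assumed from minikube's usual status output and are not shown in this run:

	out/minikube-linux-amd64 status --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}} kubeconfig:{{.Kubeconfig}}' -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813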
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-993813 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.23s)

TestStartStop/group/no-preload/serial/Pause (5.87s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-603010 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-603010 --alsologtostderr -v=1: exit status 80 (1.887950565s)

-- stdout --
	* Pausing node no-preload-603010 ... 
	
	

-- /stdout --
** stderr ** 
	I1124 03:13:09.370829  670150 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:13:09.371085  670150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:09.371093  670150 out.go:374] Setting ErrFile to fd 2...
	I1124 03:13:09.371097  670150 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:09.371287  670150 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:13:09.371541  670150 out.go:368] Setting JSON to false
	I1124 03:13:09.371564  670150 mustload.go:66] Loading cluster: no-preload-603010
	I1124 03:13:09.371927  670150 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:13:09.372295  670150 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:13:09.390883  670150 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:13:09.391131  670150 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:09.449932  670150 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:69 OomKillDisable:false NGoroutines:83 SystemTime:2025-11-24 03:13:09.439785895 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:09.450475  670150 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763935228-21975/minikube-v1.37.0-1763935228-21975-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763935228-21975-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-603010 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 03:13:09.452138  670150 out.go:179] * Pausing node no-preload-603010 ... 
	I1124 03:13:09.453252  670150 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:13:09.453577  670150 ssh_runner.go:195] Run: systemctl --version
	I1124 03:13:09.453626  670150 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:13:09.472208  670150 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:13:09.573109  670150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:09.595061  670150 pause.go:52] kubelet running: true
	I1124 03:13:09.595124  670150 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:13:09.764250  670150 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:13:09.764347  670150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:13:09.849028  670150 cri.go:89] found id: "2a23c4740fd8a0b86f68bdde06ff7fc26aef5bd492c29ae3555a8b8bd1103d39"
	I1124 03:13:09.849052  670150 cri.go:89] found id: "3d408a41820da3c6cec44b2639564b549a6b0a8af9e865107309ce3c569dd8b2"
	I1124 03:13:09.849058  670150 cri.go:89] found id: "0538658dae8eeb1e72082ae5de429b78aaf9874931620b324b5b39bcd20d564e"
	I1124 03:13:09.849064  670150 cri.go:89] found id: "ba401cc056a953c5699c15cbf074185bee5218833058db0fed286d0270ae02ba"
	I1124 03:13:09.849068  670150 cri.go:89] found id: "3072e8ebabeb4373de4efeab47db549507d3ee4e0654e8677138ab8f8c18ece3"
	I1124 03:13:09.849073  670150 cri.go:89] found id: "3dc1c0625a30c329549c40b7e0c43d1ab4d818ccc0fb85ea668066af23a327d3"
	I1124 03:13:09.849077  670150 cri.go:89] found id: "4e8e84f339bed2139577a34b2e6715ab1a8d9a3b425a16886245d61781115cf0"
	I1124 03:13:09.849081  670150 cri.go:89] found id: "7294ac6825ca8afcd936803374aa461bcb4d637ea8743600784037c5acb225b2"
	I1124 03:13:09.849086  670150 cri.go:89] found id: "767a9908e75937581bb6f9fd527e760a401658cde3d3cf28bc2b66613eab1027"
	I1124 03:13:09.849095  670150 cri.go:89] found id: "2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c"
	I1124 03:13:09.849104  670150 cri.go:89] found id: "e35e4778c80df433ced61266b491a3bff7391fc67271709f5ef3f7509c962a42"
	I1124 03:13:09.849109  670150 cri.go:89] found id: ""
	I1124 03:13:09.849153  670150 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:13:09.862341  670150 retry.go:31] will retry after 242.152438ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:13:09Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:13:10.104764  670150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:10.118690  670150 pause.go:52] kubelet running: false
	I1124 03:13:10.118741  670150 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:13:10.296632  670150 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:13:10.296701  670150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:13:10.377720  670150 cri.go:89] found id: "2a23c4740fd8a0b86f68bdde06ff7fc26aef5bd492c29ae3555a8b8bd1103d39"
	I1124 03:13:10.377746  670150 cri.go:89] found id: "3d408a41820da3c6cec44b2639564b549a6b0a8af9e865107309ce3c569dd8b2"
	I1124 03:13:10.377751  670150 cri.go:89] found id: "0538658dae8eeb1e72082ae5de429b78aaf9874931620b324b5b39bcd20d564e"
	I1124 03:13:10.377755  670150 cri.go:89] found id: "ba401cc056a953c5699c15cbf074185bee5218833058db0fed286d0270ae02ba"
	I1124 03:13:10.377758  670150 cri.go:89] found id: "3072e8ebabeb4373de4efeab47db549507d3ee4e0654e8677138ab8f8c18ece3"
	I1124 03:13:10.377768  670150 cri.go:89] found id: "3dc1c0625a30c329549c40b7e0c43d1ab4d818ccc0fb85ea668066af23a327d3"
	I1124 03:13:10.377773  670150 cri.go:89] found id: "4e8e84f339bed2139577a34b2e6715ab1a8d9a3b425a16886245d61781115cf0"
	I1124 03:13:10.377777  670150 cri.go:89] found id: "7294ac6825ca8afcd936803374aa461bcb4d637ea8743600784037c5acb225b2"
	I1124 03:13:10.377781  670150 cri.go:89] found id: "767a9908e75937581bb6f9fd527e760a401658cde3d3cf28bc2b66613eab1027"
	I1124 03:13:10.377800  670150 cri.go:89] found id: "2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c"
	I1124 03:13:10.377805  670150 cri.go:89] found id: "e35e4778c80df433ced61266b491a3bff7391fc67271709f5ef3f7509c962a42"
	I1124 03:13:10.377810  670150 cri.go:89] found id: ""
	I1124 03:13:10.377858  670150 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:13:10.391502  670150 retry.go:31] will retry after 546.683827ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:13:10Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:13:10.939003  670150 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:10.952513  670150 pause.go:52] kubelet running: false
	I1124 03:13:10.952580  670150 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:13:11.102179  670150 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:13:11.102269  670150 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:13:11.170181  670150 cri.go:89] found id: "2a23c4740fd8a0b86f68bdde06ff7fc26aef5bd492c29ae3555a8b8bd1103d39"
	I1124 03:13:11.170200  670150 cri.go:89] found id: "3d408a41820da3c6cec44b2639564b549a6b0a8af9e865107309ce3c569dd8b2"
	I1124 03:13:11.170207  670150 cri.go:89] found id: "0538658dae8eeb1e72082ae5de429b78aaf9874931620b324b5b39bcd20d564e"
	I1124 03:13:11.170218  670150 cri.go:89] found id: "ba401cc056a953c5699c15cbf074185bee5218833058db0fed286d0270ae02ba"
	I1124 03:13:11.170221  670150 cri.go:89] found id: "3072e8ebabeb4373de4efeab47db549507d3ee4e0654e8677138ab8f8c18ece3"
	I1124 03:13:11.170226  670150 cri.go:89] found id: "3dc1c0625a30c329549c40b7e0c43d1ab4d818ccc0fb85ea668066af23a327d3"
	I1124 03:13:11.170230  670150 cri.go:89] found id: "4e8e84f339bed2139577a34b2e6715ab1a8d9a3b425a16886245d61781115cf0"
	I1124 03:13:11.170234  670150 cri.go:89] found id: "7294ac6825ca8afcd936803374aa461bcb4d637ea8743600784037c5acb225b2"
	I1124 03:13:11.170238  670150 cri.go:89] found id: "767a9908e75937581bb6f9fd527e760a401658cde3d3cf28bc2b66613eab1027"
	I1124 03:13:11.170246  670150 cri.go:89] found id: "2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c"
	I1124 03:13:11.170251  670150 cri.go:89] found id: "e35e4778c80df433ced61266b491a3bff7391fc67271709f5ef3f7509c962a42"
	I1124 03:13:11.170255  670150 cri.go:89] found id: ""
	I1124 03:13:11.170300  670150 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:13:11.184198  670150 out.go:203] 
	W1124 03:13:11.185503  670150 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:13:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:13:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:13:11.185520  670150 out.go:285] * 
	* 
	W1124 03:13:11.190096  670150 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:13:11.191254  670150 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-603010 --alsologtostderr -v=1 failed: exit status 80
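The proximate failure recorded above is that every `sudo runc list -f json` probe exits 1 with `open /run/runc: no such file or directory`, so the pause path exhausts its retries without ever enumerating containers. A hedged reproduction sketch against the same node (the runc and crictl commands are copied verbatim from the stderr above; the /run/crio path is an assumption about where CRI-O keeps its own runtime state, not something this run verified):

	# Re-run the exact probe the pause path uses, then see which state roots exist.
	out/minikube-linux-amd64 -p no-preload-603010 ssh 'sudo runc list -f json; echo exit=$?'
	out/minikube-linux-amd64 -p no-preload-603010 ssh 'ls -ld /run/runc /run/crio'
	# CRI-O's own view of the containers the pause path was trying to list:
	out/minikube-linux-amd64 -p no-preload-603010 ssh \
	  'sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system'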
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-603010
helpers_test.go:243: (dbg) docker inspect no-preload-603010:

-- stdout --
	[
	    {
	        "Id": "6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845",
	        "Created": "2025-11-24T03:10:43.847831353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 658004,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:12:06.634961378Z",
	            "FinishedAt": "2025-11-24T03:12:05.766626103Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845/hostname",
	        "HostsPath": "/var/lib/docker/containers/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845/hosts",
	        "LogPath": "/var/lib/docker/containers/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845-json.log",
	        "Name": "/no-preload-603010",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-603010:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-603010",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845",
	                "LowerDir": "/var/lib/docker/overlay2/883ce309da55410098919db1c8e27882341f46628e370779f1e1f648adf5829e-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/883ce309da55410098919db1c8e27882341f46628e370779f1e1f648adf5829e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/883ce309da55410098919db1c8e27882341f46628e370779f1e1f648adf5829e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/883ce309da55410098919db1c8e27882341f46628e370779f1e1f648adf5829e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-603010",
	                "Source": "/var/lib/docker/volumes/no-preload-603010/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-603010",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-603010",
	                "name.minikube.sigs.k8s.io": "no-preload-603010",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "281ef9322f023703a943204ee9ffc8d29e01369033d640a5c45ee0792c21fb26",
	            "SandboxKey": "/var/run/docker/netns/281ef9322f02",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-603010": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6fb41680caede660e77e75cbbc4bea8a2931e68f7736aa43850d10472e9557bd",
	                    "EndpointID": "ba54ef7afa710fa53c8fb56a6f238e95db2d97a616cc792bb45634538f8d22bd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "fe:1d:9c:72:30:d9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-603010",
	                        "6cf4d6c6dc34"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
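For cross-checking, the SSH endpoint the pause command dialed (127.0.0.1:33493 in the stderr above) is derived from this inspect output with the same Go template minikube itself ran:

	docker container inspect -f \
	  '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-603010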
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603010 -n no-preload-603010
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603010 -n no-preload-603010: exit status 2 (336.708688ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-603010 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-603010 logs -n 25: (1.137774303s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p no-preload-603010 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-579951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-438041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ image   │ newest-cni-438041 image list --format=json                                                                                                                                                                                                    │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p newest-cni-438041 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p no-preload-603010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p disable-driver-mounts-242597                                                                                                                                                                                                               │ disable-driver-mounts-242597 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ image   │ old-k8s-version-579951 image list --format=json                                                                                                                                                                                               │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p old-k8s-version-579951 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                                                                                     │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                                                                                     │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable metrics-server -p embed-certs-284604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ stop    │ -p embed-certs-284604 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ image   │ default-k8s-diff-port-993813 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ pause   │ -p default-k8s-diff-port-993813 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ image   │ no-preload-603010 image list --format=json                                                                                                                                                                                                    │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ pause   │ -p no-preload-603010 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:12:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:12:09.055015  658811 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:12:09.055230  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055247  658811 out.go:374] Setting ErrFile to fd 2...
	I1124 03:12:09.055253  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055468  658811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:12:09.055909  658811 out.go:368] Setting JSON to false
	I1124 03:12:09.056956  658811 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6876,"bootTime":1763947053,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:12:09.057009  658811 start.go:143] virtualization: kvm guest
	I1124 03:12:09.058671  658811 out.go:179] * [embed-certs-284604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:12:09.059850  658811 notify.go:221] Checking for updates...
	I1124 03:12:09.059855  658811 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:12:09.061128  658811 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:12:09.062317  658811 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:09.063358  658811 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:12:09.064255  658811 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:12:09.065078  658811 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:12:09.066407  658811 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066509  658811 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066589  658811 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:12:09.066666  658811 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:12:09.089713  658811 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:12:09.089855  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.145948  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.135562124 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.146071  658811 docker.go:319] overlay module found
	I1124 03:12:09.147708  658811 out.go:179] * Using the docker driver based on user configuration
	I1124 03:12:09.148714  658811 start.go:309] selected driver: docker
	I1124 03:12:09.148737  658811 start.go:927] validating driver "docker" against <nil>
	I1124 03:12:09.148747  658811 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:12:09.149338  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.210343  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.200351707 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.210534  658811 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:12:09.210794  658811 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:09.212381  658811 out.go:179] * Using Docker driver with root privileges
	I1124 03:12:09.213398  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:09.213482  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:09.213497  658811 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:12:09.213574  658811 start.go:353] cluster config:
	{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:09.214730  658811 out.go:179] * Starting "embed-certs-284604" primary control-plane node in "embed-certs-284604" cluster
	I1124 03:12:09.215613  658811 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:12:09.216663  658811 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:12:09.217654  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.217694  658811 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:12:09.217703  658811 cache.go:65] Caching tarball of preloaded images
	I1124 03:12:09.217732  658811 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:12:09.217791  658811 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:12:09.217808  658811 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:12:09.217977  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:09.218021  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json: {Name:mkd4898576ebe0ebf6d2ca35fddd33eac8f127df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:09.239944  658811 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:12:09.239962  658811 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:12:09.239976  658811 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:12:09.240004  658811 start.go:360] acquireMachinesLock for embed-certs-284604: {Name:mkd39be5908e1d289ed5af40b6c2b1c510beffd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:12:09.240088  658811 start.go:364] duration metric: took 68.665µs to acquireMachinesLock for "embed-certs-284604"
	I1124 03:12:09.240109  658811 start.go:93] Provisioning new machine with config: &{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:09.240182  658811 start.go:125] createHost starting for "" (driver="docker")
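	
	The lock.go and start.go entries above show named-lock acquisition with Delay:500ms and a Timeout before host creation begins. Below is a rough Go stand-in for that acquire-with-retry shape; the lock-file path is hypothetical and minikube's real lock implementation differs:
	
	package main
	
	import (
		"errors"
		"fmt"
		"os"
		"time"
	)
	
	// acquire polls for an exclusive lock file until timeout, retrying every delay.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if !errors.Is(err, os.ErrExist) {
				return nil, err
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}
	
	func main() {
		// Delay and timeout mirror the values logged above; the path is invented.
		release, err := acquire("/tmp/embed-certs-284604.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("machines lock held")
	}
	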
	I1124 03:12:05.014758  656542 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-993813" ...
	I1124 03:12:05.014805  656542 cli_runner.go:164] Run: docker start default-k8s-diff-port-993813
	I1124 03:12:05.297424  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:05.316835  656542 kic.go:430] container "default-k8s-diff-port-993813" state is running.
	I1124 03:12:05.317309  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:05.336690  656542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/config.json ...
	I1124 03:12:05.336923  656542 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:05.336992  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:05.356564  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:05.356863  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:05.356907  656542 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:05.357642  656542 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39256->127.0.0.1:33488: read: connection reset by peer
	I1124 03:12:08.497704  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.497744  656542 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993813"
	I1124 03:12:08.497799  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.516284  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.516620  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.516642  656542 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993813 && echo "default-k8s-diff-port-993813" | sudo tee /etc/hostname
	I1124 03:12:08.664299  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.664399  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.683215  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.683424  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.683440  656542 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993813/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:08.824495  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:08.824534  656542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:08.824571  656542 ubuntu.go:190] setting up certificates
	I1124 03:12:08.824597  656542 provision.go:84] configureAuth start
	I1124 03:12:08.824659  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:08.842592  656542 provision.go:143] copyHostCerts
	I1124 03:12:08.842639  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:08.842651  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:08.842701  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:08.842805  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:08.842813  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:08.842838  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:08.842940  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:08.842950  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:08.842981  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:08.843051  656542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993813 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-993813 localhost minikube]
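	
	provision.go:117 generates a server certificate whose SAN list covers the loopback address, the container IP, and the machine hostnames. A self-contained Go sketch building the same SAN set with crypto/x509 follows; it self-signs for brevity, whereas minikube signs with the profile CA, and the Organization value is copied from the org= field above:
	
	package main
	
	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)
	
	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-993813"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			// SANs mirror the san=[...] list logged above.
			DNSNames:    []string{"default-k8s-diff-port-993813", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Self-signed here (template doubles as parent); minikube uses its CA key.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("server cert: %d bytes DER\n", len(der))
	}
	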
	I1124 03:12:08.993088  656542 provision.go:177] copyRemoteCerts
	I1124 03:12:08.993141  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:08.993180  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.010481  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.112610  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:09.134182  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 03:12:09.153393  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:12:09.173516  656542 provision.go:87] duration metric: took 348.902104ms to configureAuth
	I1124 03:12:09.173547  656542 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:09.173717  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.173820  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.195519  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:09.195738  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:09.195756  656542 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.551404  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:09.551434  656542 machine.go:97] duration metric: took 4.214494542s to provisionDockerMachine
	I1124 03:12:09.551449  656542 start.go:293] postStartSetup for "default-k8s-diff-port-993813" (driver="docker")
	I1124 03:12:09.551463  656542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:09.551533  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:09.551574  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.572440  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.684044  656542 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:09.688328  656542 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:09.688354  656542 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:09.688365  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:09.688414  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:09.688488  656542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:09.688660  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:09.696023  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:09.725715  656542 start.go:296] duration metric: took 174.248037ms for postStartSetup
	I1124 03:12:09.725795  656542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:09.725851  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.747235  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:06.610202  657716 out.go:252] * Restarting existing docker container for "no-preload-603010" ...
	I1124 03:12:06.610267  657716 cli_runner.go:164] Run: docker start no-preload-603010
	I1124 03:12:06.895418  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:06.913279  657716 kic.go:430] container "no-preload-603010" state is running.
	I1124 03:12:06.913694  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:06.931543  657716 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/config.json ...
	I1124 03:12:06.931779  657716 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:06.931840  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:06.949180  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:06.949422  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:06.949436  657716 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:06.950106  657716 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53738->127.0.0.1:33493: read: connection reset by peer
	I1124 03:12:10.094410  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.094455  657716 ubuntu.go:182] provisioning hostname "no-preload-603010"
	I1124 03:12:10.094548  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.117277  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.117614  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.117637  657716 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-603010 && echo "no-preload-603010" | sudo tee /etc/hostname
	I1124 03:12:10.272082  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.272162  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.293197  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.293525  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.293557  657716 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603010' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603010/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603010' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:10.440289  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:10.440322  657716 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:10.440350  657716 ubuntu.go:190] setting up certificates
	I1124 03:12:10.440374  657716 provision.go:84] configureAuth start
	I1124 03:12:10.440443  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:10.458672  657716 provision.go:143] copyHostCerts
	I1124 03:12:10.458743  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:10.458766  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:10.458857  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:10.459021  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:10.459037  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:10.459080  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:10.459183  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:10.459195  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:10.459232  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:10.459323  657716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.no-preload-603010 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-603010]
	I1124 03:12:10.546420  657716 provision.go:177] copyRemoteCerts
	I1124 03:12:10.546503  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:10.546552  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.564799  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:10.669343  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:10.687953  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:10.707320  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:10.728398  657716 provision.go:87] duration metric: took 288.002675ms to configureAuth
	I1124 03:12:10.728450  657716 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:10.728791  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:10.728992  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.754544  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.754857  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.754907  657716 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.846210  656542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:09.851045  656542 fix.go:56] duration metric: took 4.853815531s for fixHost
	I1124 03:12:09.851067  656542 start.go:83] releasing machines lock for "default-k8s-diff-port-993813", held for 4.853861223s
	I1124 03:12:09.851139  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:09.871679  656542 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:09.871744  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.871767  656542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:09.871859  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.897665  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.897832  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.996390  656542 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:10.070447  656542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:10.108350  656542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:10.113659  656542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:10.113732  656542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:10.122258  656542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:12:10.122274  656542 start.go:496] detecting cgroup driver to use...
	I1124 03:12:10.122301  656542 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:10.122333  656542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:10.138420  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:10.151623  656542 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:10.151696  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:10.169717  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:10.185403  656542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:10.268937  656542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:10.361626  656542 docker.go:234] disabling docker service ...
	I1124 03:12:10.361713  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:10.376259  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:10.389709  656542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:10.493317  656542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:10.581163  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:10.594309  656542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:10.608489  656542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:10.608559  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.618090  656542 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:10.618147  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.629142  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.639755  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.648289  656542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:10.657390  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.667835  656542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.677148  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.686554  656542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:10.694262  656542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:10.701983  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:10.784645  656542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:12:13.176259  656542 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.391580237s)
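	
	The sed invocations above rewrite whole key = value lines in /etc/crio/crio.conf.d/02-crio.conf (pause image, systemd cgroup manager, conmon cgroup, the unprivileged-port sysctl) before cri-o is restarted. Here is a minimal Go sketch of that whole-line rewrite pattern, assuming a flat TOML-style drop-in; minikube itself shells out to sed, as logged:
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// setKey replaces any line containing `key = ...` with `key = "value"`,
	// mirroring the `sed -i 's|^.*key = .*$|key = "value"|'` pattern above.
	func setKey(conf, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAllString(conf, key+` = "`+value+`"`)
	}
	
	func main() {
		conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
		conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
		conf = setKey(conf, "cgroup_manager", "systemd")
		fmt.Print(conf)
	}
	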
	I1124 03:12:13.176297  656542 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:13.176344  656542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:13.182771  656542 start.go:564] Will wait 60s for crictl version
	I1124 03:12:13.182920  656542 ssh_runner.go:195] Run: which crictl
	I1124 03:12:13.188282  656542 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:13.221129  656542 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:13.221208  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.256022  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.289098  656542 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1124 03:12:09.667322  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:11.810684  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:09.241811  658811 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:12:09.242074  658811 start.go:159] libmachine.API.Create for "embed-certs-284604" (driver="docker")
	I1124 03:12:09.242107  658811 client.go:173] LocalClient.Create starting
	I1124 03:12:09.242186  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:12:09.242224  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242246  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242326  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:12:09.242354  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242374  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242824  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:12:09.259427  658811 cli_runner.go:211] docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:12:09.259477  658811 network_create.go:284] running [docker network inspect embed-certs-284604] to gather additional debugging logs...
	I1124 03:12:09.259492  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604
	W1124 03:12:09.275004  658811 cli_runner.go:211] docker network inspect embed-certs-284604 returned with exit code 1
	I1124 03:12:09.275029  658811 network_create.go:287] error running [docker network inspect embed-certs-284604]: docker network inspect embed-certs-284604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-284604 not found
	I1124 03:12:09.275039  658811 network_create.go:289] output of [docker network inspect embed-certs-284604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-284604 not found
	
	** /stderr **
	I1124 03:12:09.275132  658811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:09.292074  658811 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:12:09.292745  658811 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:12:09.293207  658811 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:12:09.293801  658811 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-50b2e4e61586 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:0d:88:19:7d:df} reservation:<nil>}
	I1124 03:12:09.294406  658811 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6fb41680caed IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:a0:6c:44:95:b2} reservation:<nil>}
	I1124 03:12:09.295273  658811 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eef7f0}
	I1124 03:12:09.295296  658811 network_create.go:124] attempt to create docker network embed-certs-284604 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 03:12:09.295333  658811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-284604 embed-certs-284604
	I1124 03:12:09.341016  658811 network_create.go:108] docker network embed-certs-284604 192.168.94.0/24 created
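	
	The network.go lines above skip each taken subnet until a free one is found; the candidates step the third octet by 9 (49, 58, 67, 76, 85, 94), a pattern inferred from this log rather than stated by it. A small Go sketch of that scan:
	
	package main
	
	import "fmt"
	
	// firstFreeSubnet returns the first candidate /24 not already in use,
	// stepping the third octet by 9 starting at 192.168.49.0/24.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 247; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet
			}
		}
		return ""
	}
	
	func main() {
		taken := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true, "192.168.67.0/24": true,
			"192.168.76.0/24": true, "192.168.85.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // 192.168.94.0/24, matching the log
	}
	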
	I1124 03:12:09.341044  658811 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-284604" container
	I1124 03:12:09.341097  658811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:12:09.358710  658811 cli_runner.go:164] Run: docker volume create embed-certs-284604 --label name.minikube.sigs.k8s.io=embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:12:09.377491  658811 oci.go:103] Successfully created a docker volume embed-certs-284604
	I1124 03:12:09.377565  658811 cli_runner.go:164] Run: docker run --rm --name embed-certs-284604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --entrypoint /usr/bin/test -v embed-certs-284604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:12:09.757637  658811 oci.go:107] Successfully prepared a docker volume embed-certs-284604
	I1124 03:12:09.757726  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.757742  658811 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:12:09.757816  658811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:12:13.055592  658811 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (3.297719307s)
	I1124 03:12:13.055632  658811 kic.go:203] duration metric: took 3.29788472s to extract preloaded images to volume ...
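	
	The extraction step above is just tar with an lz4 filter run inside a sidecar container against the mounted volume. A hedged Go sketch of the equivalent invocation, using the in-container paths from the logged command:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// Paths are the bind-mount targets from the `docker run ... tar` command above.
		cmd := exec.Command("tar", "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
			return
		}
		fmt.Println("preload extracted")
	}
	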
	W1124 03:12:13.055721  658811 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:12:13.055758  658811 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:12:13.055810  658811 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:12:13.124836  658811 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-284604 --name embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-284604 --network embed-certs-284604 --ip 192.168.94.2 --volume embed-certs-284604:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:12:13.468642  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Running}}
	I1124 03:12:13.493010  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.520114  658811 cli_runner.go:164] Run: docker exec embed-certs-284604 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:12:13.579438  658811 oci.go:144] the created container "embed-certs-284604" has a running status.
	I1124 03:12:13.579473  658811 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa...
	I1124 03:12:13.686392  658811 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:12:13.719014  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.744934  658811 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:12:13.744979  658811 kic_runner.go:114] Args: [docker exec --privileged embed-certs-284604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:12:13.804379  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.833184  658811 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:13.833391  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:13.865266  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:13.865635  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:13.865670  658811 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:13.866448  658811 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55158->127.0.0.1:33498: read: connection reset by peer
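	
	As with the other containers, the first dial to the freshly forwarded SSH port is reset while sshd is still coming up, and a later attempt succeeds. A TCP-level Go sketch of dial-until-deadline; the real code retries the full SSH handshake, and the address is just the forwarded port shown above:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// dialWithRetry keeps dialing until the port accepts a connection or the
	// deadline passes.
	func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("giving up on %s: %w", addr, err)
			}
			time.Sleep(time.Second)
		}
	}
	
	func main() {
		conn, err := dialWithRetry("127.0.0.1:33498", 30*time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		conn.Close()
		fmt.Println("port is up")
	}
	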
	I1124 03:12:13.290552  656542 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:13.314170  656542 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:13.318716  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:13.333300  656542 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:13.333436  656542 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:13.333523  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.375001  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.375027  656542 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:13.375078  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.407152  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.407180  656542 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:13.407190  656542 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 03:12:13.407342  656542 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
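The empty ExecStart= line in the [Service] block above is deliberate: in a systemd drop-in, an empty assignment clears the command list inherited from the base unit, so the second ExecStart replaces the kubelet invocation instead of adding a second one. A hedged sketch of writing such a drop-in by hand (the flag set here is trimmed for illustration):

    # Override a unit's command from a drop-in: blank it, then redefine it
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet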
	I1124 03:12:13.407444  656542 ssh_runner.go:195] Run: crio config
	I1124 03:12:13.468159  656542 cni.go:84] Creating CNI manager for ""
	I1124 03:12:13.468191  656542 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:13.468220  656542 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:13.468251  656542 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993813 NodeName:default-k8s-diff-port-993813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:13.468425  656542 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993813"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:13.468485  656542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:13.480922  656542 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:13.480989  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:13.491437  656542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 03:12:13.510538  656542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:13.531599  656542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
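With the rendered config now sitting at /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked before anything touches the node; kubeadm's --dry-run renders manifests into a temporary directory instead of applying them. A hedged sketch (preflight may still warn on a host that already runs a cluster):

    # Validate the generated kubeadm config without mutating the node
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run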
	I1124 03:12:13.550625  656542 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:13.557123  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:13.570105  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:13.687069  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:13.711246  656542 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813 for IP: 192.168.76.2
	I1124 03:12:13.711268  656542 certs.go:195] generating shared ca certs ...
	I1124 03:12:13.711287  656542 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:13.711456  656542 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:13.711513  656542 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:13.711526  656542 certs.go:257] generating profile certs ...
	I1124 03:12:13.711642  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key
	I1124 03:12:13.711706  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619
	I1124 03:12:13.711753  656542 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key
	I1124 03:12:13.711996  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:13.712051  656542 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:13.712065  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:13.712101  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:13.712139  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:13.712175  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:13.712240  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.712851  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:13.744604  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:13.773924  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:13.797454  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:13.831783  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 03:12:13.870484  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:13.900124  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:13.922822  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:12:13.948171  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:13.977351  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:14.003032  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:14.029032  656542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:14.044929  656542 ssh_runner.go:195] Run: openssl version
	I1124 03:12:14.055102  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:14.069569  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074149  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074206  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.129455  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
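The pair of commands above, "openssl x509 -hash" followed by a symlink named "<hash>.0", implements OpenSSL's hashed CA directory layout: the library locates issuer certificates in /etc/ssl/certs by the subject-name hash of each file, and 51391683 is that hash for this cert. The generic recipe, with a placeholder cert path:

    # Install a CA cert under the hash-named symlink OpenSSL looks up
    cert=/usr/share/ca-certificates/extra-ca.pem   # placeholder path
    hash=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"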
	I1124 03:12:14.139467  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:14.150460  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155547  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155598  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.213122  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:14.224488  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:14.235043  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239741  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239796  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.296275  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:12:14.307247  656542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:14.315784  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:14.374911  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:14.452037  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:14.514532  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:14.577046  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:14.634822  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
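Each "-checkend 86400" call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means yes, non-zero means expiring or expired, which is the signal used to decide whether certs need regenerating. A minimal sketch over two of the same paths:

    # Flag control-plane certs that expire within 24 hours
    for c in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
             /var/lib/minikube/certs/etcd/server.crt; do
      sudo openssl x509 -noout -in "$c" -checkend 86400 || echo "expiring soon: $c"
    done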
	I1124 03:12:14.697600  656542 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:14.697704  656542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:14.697759  656542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:14.736428  656542 cri.go:89] found id: "9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6"
	I1124 03:12:14.736451  656542 cri.go:89] found id: "a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329"
	I1124 03:12:14.736458  656542 cri.go:89] found id: "dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7"
	I1124 03:12:14.736462  656542 cri.go:89] found id: "11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e"
	I1124 03:12:14.736466  656542 cri.go:89] found id: ""
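The container IDs above come from filtering on the io.kubernetes.pod.namespace label that the kubelet stamps on every container, which scopes crictl to system pods without any client-side parsing. Reproduced standalone:

    # List all kube-system container IDs (running and exited)
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system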
	I1124 03:12:14.736511  656542 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:14.754070  656542 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:14Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:14.754156  656542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:14.765200  656542 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:14.765224  656542 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:14.765273  656542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:14.773243  656542 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:14.773947  656542 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993813" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.774328  656542 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993813" cluster setting kubeconfig missing "default-k8s-diff-port-993813" context setting]
	I1124 03:12:14.774925  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.776519  656542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:14.785657  656542 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 03:12:14.785687  656542 kubeadm.go:602] duration metric: took 20.455875ms to restartPrimaryControlPlane
	I1124 03:12:14.785704  656542 kubeadm.go:403] duration metric: took 88.114399ms to StartCluster
	I1124 03:12:14.785722  656542 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.785796  656542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.786941  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.787180  656542 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:14.787429  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:14.787487  656542 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:14.787568  656542 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.787584  656542 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.787592  656542 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:14.787615  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.788183  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.788464  656542 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788516  656542 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993813"
	I1124 03:12:14.788466  656542 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788738  656542 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.788750  656542 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:14.788782  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.789431  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.789731  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.792034  656542 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:14.793166  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.820828  656542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:14.821632  656542 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.821655  656542 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:14.821731  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.821909  656542 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:12:14.822084  656542 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:14.822112  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:14.822188  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.822548  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.827335  656542 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:13.173638  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:13.173665  657716 machine.go:97] duration metric: took 6.241868553s to provisionDockerMachine
	I1124 03:12:13.173679  657716 start.go:293] postStartSetup for "no-preload-603010" (driver="docker")
	I1124 03:12:13.173692  657716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:13.173754  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:13.173803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.199819  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
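The ssh client dialing 127.0.0.1:33493 got its port from the docker container inspect call just above, which indexes the published-ports map for "22/tcp" and takes the first host binding. The same lookup on its own (container name taken from this log):

    # Which host port is mapped to the container's SSH port?
    docker container inspect no-preload-603010 \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'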
	I1124 03:12:13.311414  657716 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:13.316263  657716 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:13.316292  657716 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:13.316304  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:13.316362  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:13.316451  657716 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:13.316564  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:13.330333  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.349678  657716 start.go:296] duration metric: took 175.98281ms for postStartSetup
	I1124 03:12:13.349757  657716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:13.349803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.372668  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.477580  657716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:13.483572  657716 fix.go:56] duration metric: took 6.891356705s for fixHost
	I1124 03:12:13.483602  657716 start.go:83] releasing machines lock for "no-preload-603010", held for 6.891418388s
	I1124 03:12:13.483679  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:13.509057  657716 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:13.509123  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.509169  657716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:13.509281  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.533830  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.535423  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.716640  657716 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:13.727633  657716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:13.784701  657716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:13.789877  657716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:13.789964  657716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:13.799956  657716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
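The find invocation above side-lines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so the runtime will not load them ahead of the CNI minikube installs; here nothing matched. The same rename, written out with conventional quoting:

    # Disable conflicting CNI configs by renaming them (skips already-disabled files)
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;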
	I1124 03:12:13.799989  657716 start.go:496] detecting cgroup driver to use...
	I1124 03:12:13.800021  657716 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:13.800080  657716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:13.821650  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:13.845364  657716 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:13.845437  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:13.876223  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:13.896810  657716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:14.018144  657716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:14.133192  657716 docker.go:234] disabling docker service ...
	I1124 03:12:14.133276  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:14.151812  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:14.167561  657716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:14.282838  657716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:14.401610  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:14.417930  657716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:14.437107  657716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:14.437170  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.449631  657716 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:14.449698  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.462463  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.477641  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.490417  657716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:14.504273  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.516484  657716 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.526509  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.538280  657716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:14.546998  657716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
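The run of sed -i edits above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to systemd, put conmon in the pod cgroup, and open unprivileged low ports through default_sysctls, with the sysctl and ip_forward writes preparing the kernel side. The core of it, condensed into a hedged sketch:

    # Minimal CRI-O drop-in edits mirroring the log (same file, same keys)
    conf=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$conf"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$conf"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
    sudo systemctl daemon-reload && sudo systemctl restart crio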
	I1124 03:12:14.555574  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.685636  657716 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:12:14.944749  657716 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:14.944917  657716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:14.950036  657716 start.go:564] Will wait 60s for crictl version
	I1124 03:12:14.950115  657716 ssh_runner.go:195] Run: which crictl
	I1124 03:12:14.954328  657716 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:14.985292  657716 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
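crictl can answer here because /etc/crictl.yaml was pointed at the CRI-O socket a few steps earlier, so the tool no longer has to probe for other runtimes. A sketch of that configuration plus the same check:

    # Pin crictl to CRI-O and confirm the runtime answers
    sudo tee /etc/crictl.yaml >/dev/null <<'EOF'
    runtime-endpoint: unix:///var/run/crio/crio.sock
    EOF
    sudo crictl version   # expect RuntimeName: cri-o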
	I1124 03:12:14.985374  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.030503  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.075694  657716 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:15.076822  657716 cli_runner.go:164] Run: docker network inspect no-preload-603010 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:15.102488  657716 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:15.108702  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:15.124431  657716 kubeadm.go:884] updating cluster {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker
BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:15.124588  657716 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:15.124636  657716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:15.167486  657716 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:15.167521  657716 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:15.167539  657716 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:15.167821  657716 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-603010 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:15.167925  657716 ssh_runner.go:195] Run: crio config
	I1124 03:12:15.235069  657716 cni.go:84] Creating CNI manager for ""
	I1124 03:12:15.235092  657716 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:15.235110  657716 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:15.235137  657716 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603010 NodeName:no-preload-603010 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/
kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:15.235315  657716 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603010"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:15.235402  657716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:15.246426  657716 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:15.246486  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:15.255073  657716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:12:15.274174  657716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:15.291964  657716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1124 03:12:15.310704  657716 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:15.315241  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:15.329049  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:15.444004  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:15.468249  657716 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010 for IP: 192.168.85.2
	I1124 03:12:15.468275  657716 certs.go:195] generating shared ca certs ...
	I1124 03:12:15.468303  657716 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:15.468461  657716 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:15.468527  657716 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:15.468545  657716 certs.go:257] generating profile certs ...
	I1124 03:12:15.468671  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key
	I1124 03:12:15.468756  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738
	I1124 03:12:15.468820  657716 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key
	I1124 03:12:15.469056  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:15.469155  657716 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:15.469190  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:15.469235  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:15.469307  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:15.469360  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:15.469452  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:15.470423  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:15.492954  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:15.516840  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:15.539720  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:15.572434  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:12:15.602383  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:15.627969  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:15.650700  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:15.671263  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:15.692710  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:15.715510  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:15.740163  657716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:15.756242  657716 ssh_runner.go:195] Run: openssl version
	I1124 03:12:15.764455  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:15.774930  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779615  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779675  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.837760  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:12:15.848860  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:15.859402  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864242  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864304  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.923088  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:15.933908  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:15.944242  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949198  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949248  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:16.007273  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:16.018117  657716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:16.023108  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:16.086212  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:16.144287  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:16.203439  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:16.267980  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:16.329154  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 03:12:16.391972  657716 kubeadm.go:401] StartCluster: {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bi
naryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:16.392083  657716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:16.392153  657716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:16.431895  657716 cri.go:89] found id: "3dc1c0625a30c329549c40b7e0c43d1ab4d818ccc0fb85ea668066af23a327d3"
	I1124 03:12:16.431924  657716 cri.go:89] found id: "4e8e84f339bed2139577a34b2e6715ab1a8d9a3b425a16886245d61781115cf0"
	I1124 03:12:16.431930  657716 cri.go:89] found id: "7294ac6825ca8afcd936803374aa461bcb4d637ea8743600784037c5acb225b2"
	I1124 03:12:16.431934  657716 cri.go:89] found id: "767a9908e75937581bb6f9fd527e760a401658cde3d3cf28bc2b66613eab1027"
	I1124 03:12:16.431938  657716 cri.go:89] found id: ""
	I1124 03:12:16.431989  657716 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:16.448469  657716 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:16Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:16.448636  657716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:16.460046  657716 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:16.460066  657716 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:16.460159  657716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:16.470578  657716 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:16.472039  657716 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-603010" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.472691  657716 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-603010" cluster setting kubeconfig missing "no-preload-603010" context setting]
	I1124 03:12:16.473827  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.476388  657716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:16.491280  657716 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 03:12:16.491307  657716 kubeadm.go:602] duration metric: took 31.234841ms to restartPrimaryControlPlane
	I1124 03:12:16.491317  657716 kubeadm.go:403] duration metric: took 99.357197ms to StartCluster
	I1124 03:12:16.491333  657716 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.491393  657716 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.492731  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.492990  657716 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:16.493291  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:16.493352  657716 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:16.493441  657716 addons.go:70] Setting storage-provisioner=true in profile "no-preload-603010"
	I1124 03:12:16.493465  657716 addons.go:239] Setting addon storage-provisioner=true in "no-preload-603010"
	W1124 03:12:16.493473  657716 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:16.493503  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494027  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
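
This status probe recurs throughout the log. A small Go sketch of the same `docker container inspect --format={{.State.Status}}` call (illustrative helper, not the minikube source):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerStatus returns e.g. "running" or "exited" for a named container.
    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        st, err := containerStatus("no-preload-603010")
        fmt.Println(st, err)
    }
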
	I1124 03:12:16.494266  657716 addons.go:70] Setting dashboard=true in profile "no-preload-603010"
	I1124 03:12:16.494322  657716 addons.go:239] Setting addon dashboard=true in "no-preload-603010"
	I1124 03:12:16.494338  657716 addons.go:70] Setting default-storageclass=true in profile "no-preload-603010"
	I1124 03:12:16.494434  657716 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603010"
	W1124 03:12:16.494361  657716 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:16.494570  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494863  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.495005  657716 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:16.495647  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.496468  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:16.527269  657716 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:16.528480  657716 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:16.528517  657716 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1124 03:12:14.168310  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:16.172923  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:18.176795  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:14.828319  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:14.828372  656542 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:14.828432  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.858092  656542 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:14.858118  656542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:14.858192  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.865650  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.866433  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.895242  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.975501  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:14.992389  656542 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:12:15.008151  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:15.016186  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:15.016211  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:15.031574  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:15.042522  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:15.042540  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:15.074331  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:15.074365  656542 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:15.109090  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:15.109113  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:15.128161  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:15.128184  656542 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:15.147874  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:15.147903  656542 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:15.168191  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:15.168211  656542 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:15.185637  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:15.185661  656542 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:15.202994  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:15.203016  656542 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:15.221608  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
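
The dashboard manifests staged above are applied in a single kubectl invocation. A sketch of assembling that multi-`-f` command (illustrative; the real runner executes it over SSH, and sudo accepts the leading KUBECONFIG= assignment):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // applyManifests feeds every staged manifest to one `kubectl apply`.
    func applyManifests(kubectl string, files []string) error {
        args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig", kubectl, "apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        return exec.Command("sudo", args...).Run()
    }

    func main() {
        err := applyManifests("/var/lib/minikube/binaries/v1.34.1/kubectl",
            []string{
                "/etc/kubernetes/addons/dashboard-ns.yaml",
                "/etc/kubernetes/addons/dashboard-svc.yaml",
            })
        fmt.Println(err)
    }
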
	I1124 03:12:17.996962  656542 node_ready.go:49] node "default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:17.997067  656542 node_ready.go:38] duration metric: took 3.004589581s for node "default-k8s-diff-port-993813" to be "Ready" ...
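
The ~3s node-Ready wait above can be approximated by polling the node's Ready condition. A hypothetical Go sketch using kubectl's jsonpath output (binary path and node name illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func waitNodeReady(node string, timeout time.Duration) error {
        jp := `{.status.conditions[?(@.type=="Ready")].status}`
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "get", "node", node,
                "-o", "jsonpath="+jp).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                return nil
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("node %q never became Ready", node)
    }

    func main() {
        fmt.Println(waitNodeReady("default-k8s-diff-port-993813", 6*time.Minute))
    }
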
	I1124 03:12:17.997096  656542 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:17.997184  656542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:18.834613  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.826385361s)
	I1124 03:12:18.834690  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.803092411s)
	I1124 03:12:18.834853  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.613213665s)
	I1124 03:12:18.834988  656542 api_server.go:72] duration metric: took 4.047778988s to wait for apiserver process to appear ...
	I1124 03:12:18.835771  656542 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:18.835800  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:18.838614  656542 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993813 addons enable metrics-server
	
	I1124 03:12:18.844882  656542 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 03:12:17.043130  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.043165  658811 ubuntu.go:182] provisioning hostname "embed-certs-284604"
	I1124 03:12:17.043247  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.069679  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.070109  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.070142  658811 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-284604 && echo "embed-certs-284604" | sudo tee /etc/hostname
	I1124 03:12:17.259114  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.259199  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.284082  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.284399  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.284433  658811 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-284604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-284604/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-284604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:17.452374  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:17.452411  658811 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:17.452438  658811 ubuntu.go:190] setting up certificates
	I1124 03:12:17.452452  658811 provision.go:84] configureAuth start
	I1124 03:12:17.452521  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:17.483434  658811 provision.go:143] copyHostCerts
	I1124 03:12:17.483502  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:17.483519  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:17.483580  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:17.483712  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:17.483725  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:17.483764  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:17.483851  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:17.483858  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:17.483909  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:17.483990  658811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-284604 san=[127.0.0.1 192.168.94.2 embed-certs-284604 localhost minikube]
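
The server cert generated above carries exactly the SAN list shown in the log. A compact Go sketch of producing such a cert with crypto/x509 (self-signed here for brevity; minikube signs with the profile's ca.pem/ca-key.pem):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, _ := rsa.GenerateKey(rand.Reader, 2048)
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-284604"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration above
            // SANs from the log: [127.0.0.1 192.168.94.2 embed-certs-284604 localhost minikube]
            DNSNames:    []string{"embed-certs-284604", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
            KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
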
	I1124 03:12:17.911206  658811 provision.go:177] copyRemoteCerts
	I1124 03:12:17.911335  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:17.911394  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.943914  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.069938  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:18.098447  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:18.124997  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:18.162531  658811 provision.go:87] duration metric: took 710.055135ms to configureAuth
	I1124 03:12:18.162560  658811 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:18.162764  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:18.162877  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.187248  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:18.187553  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:18.187575  658811 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:18.557227  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:18.557257  658811 machine.go:97] duration metric: took 4.723983027s to provisionDockerMachine
	I1124 03:12:18.557270  658811 client.go:176] duration metric: took 9.315155053s to LocalClient.Create
	I1124 03:12:18.557286  658811 start.go:167] duration metric: took 9.315214435s to libmachine.API.Create "embed-certs-284604"
	I1124 03:12:18.557298  658811 start.go:293] postStartSetup for "embed-certs-284604" (driver="docker")
	I1124 03:12:18.557310  658811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:18.557379  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:18.557432  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.587404  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.715877  658811 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:18.721275  658811 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:18.721309  658811 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:18.721322  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:18.721381  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:18.721473  658811 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:18.721597  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:18.732645  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:18.763370  658811 start.go:296] duration metric: took 206.056597ms for postStartSetup
	I1124 03:12:18.763732  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.791899  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:18.792183  658811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:18.792233  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.820806  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.936530  658811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:18.948570  658811 start.go:128] duration metric: took 9.708372989s to createHost
	I1124 03:12:18.948686  658811 start.go:83] releasing machines lock for "embed-certs-284604", held for 9.708587492s
	I1124 03:12:18.948771  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.973190  658811 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:18.973375  658811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:18.973512  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.973582  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.998620  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.999698  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.845938  656542 addons.go:530] duration metric: took 4.058450553s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 03:12:18.846295  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:18.846717  656542 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:12:19.335969  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:19.342155  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1124 03:12:19.343392  656542 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:19.343421  656542 api_server.go:131] duration metric: took 507.639836ms to wait for apiserver health ...
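
The healthz wait above tolerates transient 500s (the rbac/bootstrap-roles hook still settling) until a 200 "ok". A sketch of that polling loop (URL and intervals illustrative; TLS verification skipped since the apiserver cert is self-signed in this context):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
            Timeout: 2 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "healthz returned 200: ok"
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms retry above
        }
        return fmt.Errorf("apiserver never became healthy")
    }

    func main() {
        fmt.Println(waitHealthz("https://192.168.76.2:8444/healthz", time.Minute))
    }
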
	I1124 03:12:19.343433  656542 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:19.347170  656542 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:19.347220  656542 system_pods.go:61] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.347233  656542 system_pods.go:61] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.347244  656542 system_pods.go:61] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.347253  656542 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.347263  656542 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.347271  656542 system_pods.go:61] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.347279  656542 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.347290  656542 system_pods.go:61] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.347300  656542 system_pods.go:74] duration metric: took 3.857291ms to wait for pod list to return data ...
	I1124 03:12:19.347309  656542 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:19.350005  656542 default_sa.go:45] found service account: "default"
	I1124 03:12:19.350027  656542 default_sa.go:55] duration metric: took 2.709767ms for default service account to be created ...
	I1124 03:12:19.350036  656542 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:19.354450  656542 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:19.354480  656542 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.354492  656542 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.354502  656542 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.354512  656542 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.354525  656542 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.354534  656542 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.354542  656542 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.354550  656542 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.354560  656542 system_pods.go:126] duration metric: took 4.516416ms to wait for k8s-apps to be running ...
	I1124 03:12:19.354569  656542 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:19.354617  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:19.377699  656542 system_svc.go:56] duration metric: took 23.119925ms WaitForService to wait for kubelet
	I1124 03:12:19.377726  656542 kubeadm.go:587] duration metric: took 4.590516557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:19.377808  656542 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:19.381785  656542 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:19.381815  656542 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:19.381831  656542 node_conditions.go:105] duration metric: took 4.017737ms to run NodePressure ...
	I1124 03:12:19.381846  656542 start.go:242] waiting for startup goroutines ...
	I1124 03:12:19.381857  656542 start.go:247] waiting for cluster config update ...
	I1124 03:12:19.381883  656542 start.go:256] writing updated cluster config ...
	I1124 03:12:19.382229  656542 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:19.387932  656542 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:19.394333  656542 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:16.529636  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:16.529826  657716 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:16.529877  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.529719  657716 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.530024  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:16.530070  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.534729  657716 addons.go:239] Setting addon default-storageclass=true in "no-preload-603010"
	W1124 03:12:16.534754  657716 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:16.534783  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.539339  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.565768  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.582397  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.585042  657716 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.585070  657716 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:16.585126  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.617946  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.706410  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:16.731745  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:16.731773  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:16.736337  657716 node_ready.go:35] waiting up to 6m0s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:16.736937  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.758823  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:16.758847  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:16.768684  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.788344  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:16.788369  657716 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:16.806593  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:16.806620  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:16.847576  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:16.847609  657716 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:16.867721  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:16.867755  657716 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:16.886765  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:16.886787  657716 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:16.907569  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:16.907732  657716 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:16.929396  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:16.929417  657716 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:16.958374  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:19.957067  657716 node_ready.go:49] node "no-preload-603010" is "Ready"
	I1124 03:12:19.957111  657716 node_ready.go:38] duration metric: took 3.220732108s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:19.957131  657716 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:19.957256  657716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
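
The apiserver-process wait above relies on pgrep over the node shell. A sketch of the same poll (illustrative; -x matches the full command line exactly when combined with -f, -n picks the newest match):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func waitAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("sudo", "pgrep", "-xnf",
                "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("kube-apiserver process never appeared")
    }

    func main() {
        fmt.Println(waitAPIServerProcess(time.Minute))
    }
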
	I1124 03:12:20.880814  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.143842388s)
	I1124 03:12:20.881241  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.112181993s)
	I1124 03:12:21.157660  657716 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.200376454s)
	I1124 03:12:21.157703  657716 api_server.go:72] duration metric: took 4.664681444s to wait for apiserver process to appear ...
	I1124 03:12:21.157713  657716 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:21.157733  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.158403  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199980339s)
	I1124 03:12:21.160177  657716 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-603010 addons enable metrics-server
	
	I1124 03:12:21.161363  657716 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 03:12:19.120481  658811 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:19.211741  658811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:19.277394  658811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:19.284078  658811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:19.284149  658811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:19.319995  658811 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:12:19.320028  658811 start.go:496] detecting cgroup driver to use...
	I1124 03:12:19.320064  658811 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:19.320117  658811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:19.345823  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:19.367716  658811 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:19.367782  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:19.389799  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:19.412438  658811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:19.524730  658811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:19.637210  658811 docker.go:234] disabling docker service ...
	I1124 03:12:19.637286  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:19.659861  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:19.677152  658811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:19.823448  658811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:19.960707  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:19.981616  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:20.012418  658811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:20.012486  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.058077  658811 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:20.058214  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.074742  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.118587  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.135044  658811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:20.151861  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.172656  658811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.194765  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
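
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place. An in-process Go sketch of the two central substitutions, the pause image and the cgroup manager (illustrative; the real edits run over SSH):

    package main

    import (
        "fmt"
        "regexp"
    )

    // rewriteCrioConf applies the same line rewrites as the sed calls above.
    func rewriteCrioConf(conf string) string {
        pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
        cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
        conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
        conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "systemd"`)
        return conf
    }

    func main() {
        in := "pause_image = \"old\"\ncgroup_manager = \"cgroupfs\"\n"
        fmt.Print(rewriteCrioConf(in))
    }
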
	I1124 03:12:20.232792  658811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:20.242855  658811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:20.253417  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:20.371692  658811 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:12:21.221343  658811 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:21.221440  658811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:21.226905  658811 start.go:564] Will wait 60s for crictl version
	I1124 03:12:21.227016  658811 ssh_runner.go:195] Run: which crictl
	I1124 03:12:21.231693  658811 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:21.262514  658811 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:21.262603  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.302192  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.363037  658811 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:21.162777  657716 addons.go:530] duration metric: took 4.669427095s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 03:12:21.163688  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:21.163718  657716 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:20.668896  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:23.167980  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:21.364543  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:21.388019  658811 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:21.393290  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:21.406629  658811 kubeadm.go:884] updating cluster {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:21.406778  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:21.406846  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.445258  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.445284  658811 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:21.445336  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.471000  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.471025  658811 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:21.471037  658811 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:21.471125  658811 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-284604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:21.471186  658811 ssh_runner.go:195] Run: crio config
	I1124 03:12:21.516457  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:21.516480  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:21.516502  658811 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:21.516532  658811 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-284604 NodeName:embed-certs-284604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:21.516680  658811 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-284604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:21.516751  658811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:21.524967  658811 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:21.525035  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:21.533487  658811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 03:12:21.547228  658811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:21.640415  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
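The kubeadm.yaml.new written above bundles the InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration documents rendered earlier in the log. As a minimal sketch, assuming shell access to the node and the paths shown here, the rendered file can be checked against the v1beta4 schema with kubeadm's own validator before init runs:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new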
	I1124 03:12:21.656434  658811 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:21.660696  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
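The hosts edit above is a filter-and-rewrite idiom: grep -v strips any stale control-plane.minikube.internal line, the fresh tab-separated mapping is appended, and the assembled file is copied back over /etc/hosts in a single cp so readers never see a half-written file. The same idiom in isolation, with a hypothetical host name:

	{ grep -v $'\texample.internal$' /etc/hosts; printf '192.168.94.2\texample.internal\n'; } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts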
	I1124 03:12:21.674410  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:21.772584  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:21.798340  658811 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604 for IP: 192.168.94.2
	I1124 03:12:21.798360  658811 certs.go:195] generating shared ca certs ...
	I1124 03:12:21.798381  658811 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.798539  658811 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:21.798593  658811 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:21.798607  658811 certs.go:257] generating profile certs ...
	I1124 03:12:21.798690  658811 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key
	I1124 03:12:21.798708  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt with IP's: []
	I1124 03:12:21.837756  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt ...
	I1124 03:12:21.837790  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt: {Name:mk6d8aec213556beda470e3e5188eed1aec5e183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838000  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key ...
	I1124 03:12:21.838030  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key: {Name:mk56f44e1d331f82a560e15fe6a3c3ca4602bba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838172  658811 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087
	I1124 03:12:21.838189  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 03:12:21.915471  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 ...
	I1124 03:12:21.915494  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087: {Name:mk185605a13bb00cdff0decbde0063003287a88f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915630  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 ...
	I1124 03:12:21.915643  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087: {Name:mk1404f69a73d575873220c9d20779709c9db66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915715  658811 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt
	I1124 03:12:21.915784  658811 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key
	I1124 03:12:21.915837  658811 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key
	I1124 03:12:21.915852  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt with IP's: []
	I1124 03:12:22.064876  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt ...
	I1124 03:12:22.064923  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt: {Name:mk7bbfb718db4eee243d6b6658f5b6db725b34b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:22.065108  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key ...
	I1124 03:12:22.065140  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key: {Name:mk282c31a6bdbd1f185d5fa986bb6679f789f94a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
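The apiserver profile cert generated above is signed for the Kubernetes service VIP (10.96.0.1), loopback, 10.0.0.1 and the node IP 192.168.94.2. Once it has been copied onto the node (see the scp lines below), the SAN list can be read back with openssl as a sanity check:

	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'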
	I1124 03:12:22.065488  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:22.065564  658811 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:22.065576  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:22.065602  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:22.065630  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:22.065654  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:22.065702  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:22.066383  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:22.086471  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:22.103602  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:22.120085  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:22.137488  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:12:22.154084  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:22.171055  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:22.187877  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:22.204407  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:22.222560  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:22.241380  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:22.258066  658811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:22.269950  658811 ssh_runner.go:195] Run: openssl version
	I1124 03:12:22.276120  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:22.283870  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287375  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287414  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.321400  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:22.329479  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:22.338113  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342815  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342865  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.384524  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:22.393408  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:22.402946  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.406951  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.407009  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.445501  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
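The 51391683.0, 3ec20f2e.0 and b5213941.0 links created above are OpenSSL subject-hash names: TLS clients locate a CA in /etc/ssl/certs by hashing its subject and opening <hash>.0. The minikubeCA link, for example, can be reproduced by hand:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941, per the log above
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"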
	I1124 03:12:22.454521  658811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:22.458152  658811 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:12:22.458212  658811 kubeadm.go:401] StartCluster: {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:22.458278  658811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:22.458330  658811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:22.487574  658811 cri.go:89] found id: ""
	I1124 03:12:22.487653  658811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:22.495876  658811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:12:22.505058  658811 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:12:22.505121  658811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:12:22.515162  658811 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:12:22.515181  658811 kubeadm.go:158] found existing configuration files:
	
	I1124 03:12:22.515229  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:12:22.525864  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:12:22.525956  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:12:22.535632  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:12:22.545975  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:12:22.546068  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:12:22.556144  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.566062  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:12:22.566123  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.576364  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:12:22.587041  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:12:22.587089  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:12:22.596656  658811 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:12:22.678370  658811 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:12:22.762592  658811 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
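kubeadm init is invoked above with a long --ignore-preflight-errors list because the docker driver boots Kubernetes inside a container, where host-level checks such as the swap, port and kernel-config probes do not apply; the two [WARNING] lines are the surviving, non-fatal findings. Assuming the same config file, the preflight phase can be replayed on its own to see every check that fires:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml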
	W1124 03:12:21.400229  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:23.400859  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:21.658606  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.664294  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:12:21.665654  657716 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:21.665685  657716 api_server.go:131] duration metric: took 507.965368ms to wait for apiserver health ...
	I1124 03:12:21.665696  657716 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:21.669523  657716 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:21.669569  657716 system_pods.go:61] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.669584  657716 system_pods.go:61] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.669600  657716 system_pods.go:61] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.669613  657716 system_pods.go:61] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.669620  657716 system_pods.go:61] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.669631  657716 system_pods.go:61] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.669640  657716 system_pods.go:61] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.669651  657716 system_pods.go:61] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.669661  657716 system_pods.go:74] duration metric: took 3.958242ms to wait for pod list to return data ...
	I1124 03:12:21.669744  657716 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:21.672641  657716 default_sa.go:45] found service account: "default"
	I1124 03:12:21.672665  657716 default_sa.go:55] duration metric: took 2.912794ms for default service account to be created ...
	I1124 03:12:21.672674  657716 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:21.676337  657716 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:21.676367  657716 system_pods.go:89] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.676379  657716 system_pods.go:89] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.676394  657716 system_pods.go:89] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.676403  657716 system_pods.go:89] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.676411  657716 system_pods.go:89] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.676422  657716 system_pods.go:89] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.676433  657716 system_pods.go:89] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.676441  657716 system_pods.go:89] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.676450  657716 system_pods.go:126] duration metric: took 3.770261ms to wait for k8s-apps to be running ...
	I1124 03:12:21.676459  657716 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:21.676504  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:21.690659  657716 system_svc.go:56] duration metric: took 14.192089ms WaitForService to wait for kubelet
	I1124 03:12:21.690686  657716 kubeadm.go:587] duration metric: took 5.197662584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:21.690707  657716 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:21.693136  657716 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:21.693164  657716 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:21.693184  657716 node_conditions.go:105] duration metric: took 2.469957ms to run NodePressure ...
	I1124 03:12:21.693203  657716 start.go:242] waiting for startup goroutines ...
	I1124 03:12:21.693215  657716 start.go:247] waiting for cluster config update ...
	I1124 03:12:21.693239  657716 start.go:256] writing updated cluster config ...
	I1124 03:12:21.693532  657716 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:21.697901  657716 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:21.701025  657716 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:12:23.706826  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.707596  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.168947  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:27.669069  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:25.402048  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.901054  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.707794  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.710379  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.675678  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:32.166267  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:34.784594  658811 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:12:34.784648  658811 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:12:34.784736  658811 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:12:34.784810  658811 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:12:34.784870  658811 kubeadm.go:319] OS: Linux
	I1124 03:12:34.784983  658811 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:12:34.785059  658811 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:12:34.785107  658811 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:12:34.785166  658811 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:12:34.785237  658811 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:12:34.785303  658811 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:12:34.785372  658811 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:12:34.785441  658811 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:12:34.785518  658811 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:12:34.785647  658811 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:12:34.785738  658811 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:12:34.785806  658811 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:12:34.786978  658811 out.go:252]   - Generating certificates and keys ...
	I1124 03:12:34.787057  658811 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:12:34.787166  658811 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:12:34.787260  658811 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:12:34.787314  658811 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:12:34.787380  658811 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:12:34.787463  658811 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:12:34.787510  658811 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:12:34.787654  658811 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787713  658811 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:12:34.787835  658811 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787929  658811 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:12:34.787996  658811 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:12:34.788075  658811 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:12:34.788161  658811 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:12:34.788246  658811 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:12:34.788307  658811 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:12:34.788377  658811 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:12:34.788464  658811 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:12:34.788510  658811 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:12:34.788574  658811 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:12:34.788677  658811 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:12:34.789842  658811 out.go:252]   - Booting up control plane ...
	I1124 03:12:34.789955  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:12:34.790029  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:12:34.790102  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:12:34.790202  658811 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:12:34.790286  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:12:34.790369  658811 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:12:34.790438  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:12:34.790470  658811 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:12:34.790573  658811 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:12:34.790662  658811 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:12:34.790715  658811 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001939634s
	I1124 03:12:34.790808  658811 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:12:34.790874  658811 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 03:12:34.790987  658811 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:12:34.791057  658811 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:12:34.791109  658811 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.83516238s
	I1124 03:12:34.791172  658811 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.120221493s
	I1124 03:12:34.791231  658811 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501624476s
	I1124 03:12:34.791319  658811 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:12:34.791443  658811 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:12:34.791516  658811 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:12:34.791778  658811 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-284604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:12:34.791865  658811 kubeadm.go:319] [bootstrap-token] Using token: 6opk0j.95uwfc60sd8szhpc
	I1124 03:12:34.793026  658811 out.go:252]   - Configuring RBAC rules ...
	I1124 03:12:34.793125  658811 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:12:34.793213  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:12:34.793344  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:12:34.793455  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:12:34.793557  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:12:34.793642  658811 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:12:34.793774  658811 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:12:34.793810  658811 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:12:34.793851  658811 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:12:34.793857  658811 kubeadm.go:319] 
	I1124 03:12:34.793964  658811 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:12:34.793973  658811 kubeadm.go:319] 
	I1124 03:12:34.794046  658811 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:12:34.794053  658811 kubeadm.go:319] 
	I1124 03:12:34.794074  658811 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:12:34.794151  658811 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:12:34.794229  658811 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:12:34.794239  658811 kubeadm.go:319] 
	I1124 03:12:34.794318  658811 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:12:34.794327  658811 kubeadm.go:319] 
	I1124 03:12:34.794375  658811 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:12:34.794381  658811 kubeadm.go:319] 
	I1124 03:12:34.794424  658811 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:12:34.794490  658811 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:12:34.794554  658811 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:12:34.794560  658811 kubeadm.go:319] 
	I1124 03:12:34.794633  658811 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:12:34.794705  658811 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:12:34.794712  658811 kubeadm.go:319] 
	I1124 03:12:34.794781  658811 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.794955  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:12:34.794990  658811 kubeadm.go:319] 	--control-plane 
	I1124 03:12:34.794996  658811 kubeadm.go:319] 
	I1124 03:12:34.795133  658811 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:12:34.795142  658811 kubeadm.go:319] 
	I1124 03:12:34.795208  658811 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.795304  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
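The --discovery-token-ca-cert-hash printed above is the SHA-256 digest of the cluster CA's DER-encoded public key, which joining nodes use to pin the control plane. It can be recomputed from the CA cert (kept by minikube under /var/lib/minikube/certs rather than the stock /etc/kubernetes/pki path):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der \
	  | openssl dgst -sha256 -hex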
	I1124 03:12:34.795316  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:34.795322  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:34.796503  658811 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1124 03:12:29.901574  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.399665  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.206353  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.206828  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.667383  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:35.167626  650744 pod_ready.go:94] pod "coredns-5dd5756b68-5nwx9" is "Ready"
	I1124 03:12:35.167652  650744 pod_ready.go:86] duration metric: took 36.006547637s for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.170471  650744 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.174915  650744 pod_ready.go:94] pod "etcd-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.174952  650744 pod_ready.go:86] duration metric: took 4.460425ms for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.178276  650744 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.181797  650744 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.181815  650744 pod_ready.go:86] duration metric: took 3.521385ms for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.184086  650744 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.364640  650744 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.364666  650744 pod_ready.go:86] duration metric: took 180.561055ms for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.566321  650744 pod_ready.go:83] waiting for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.965760  650744 pod_ready.go:94] pod "kube-proxy-r82jh" is "Ready"
	I1124 03:12:35.965786  650744 pod_ready.go:86] duration metric: took 399.441601ms for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.166112  650744 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564858  650744 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-579951" is "Ready"
	I1124 03:12:36.564911  650744 pod_ready.go:86] duration metric: took 398.774389ms for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564927  650744 pod_ready.go:40] duration metric: took 37.40842222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:36.606666  650744 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 03:12:36.609650  650744 out.go:203] 
	W1124 03:12:36.610839  650744 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:12:36.611943  650744 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:12:36.613009  650744 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-579951" cluster and "default" namespace by default
	I1124 03:12:34.797545  658811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:12:34.801904  658811 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:12:34.801919  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:12:34.815659  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:12:35.008985  658811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:12:35.009118  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-284604 minikube.k8s.io/updated_at=2025_11_24T03_12_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=embed-certs-284604 minikube.k8s.io/primary=true
	I1124 03:12:35.009137  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.019423  658811 ops.go:34] apiserver oom_adj: -16
	I1124 03:12:35.098937  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.600025  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.099882  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.599914  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.099714  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.599861  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.098989  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.599248  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.099379  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.599598  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.664570  658811 kubeadm.go:1114] duration metric: took 4.655535544s to wait for elevateKubeSystemPrivileges
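elevateKubeSystemPrivileges above polls `kubectl get sa default` until the default service account exists, while the minikube-rbac ClusterRoleBinding created alongside it grants kube-system:default cluster-admin. Once the loop exits, the binding can be inspected directly:

	sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac -o wide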
	I1124 03:12:39.664621  658811 kubeadm.go:403] duration metric: took 17.206413974s to StartCluster
	I1124 03:12:39.664642  658811 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.664720  658811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:39.666858  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.667137  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:12:39.667148  658811 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:39.667230  658811 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:39.667331  658811 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-284604"
	I1124 03:12:39.667356  658811 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-284604"
	I1124 03:12:39.667360  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:39.667396  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.667427  658811 addons.go:70] Setting default-storageclass=true in profile "embed-certs-284604"
	I1124 03:12:39.667451  658811 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-284604"
	I1124 03:12:39.667810  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.667990  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.668614  658811 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:39.670239  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:39.693324  658811 addons.go:239] Setting addon default-storageclass=true in "embed-certs-284604"
	I1124 03:12:39.693377  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.693617  658811 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 03:12:34.900232  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:36.901987  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:39.399311  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:39.693843  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.695301  658811 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.695324  658811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:39.695401  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.723273  658811 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.723298  658811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:39.723378  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.730678  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.746663  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.790082  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:12:39.807223  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:39.854663  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.859938  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.988561  658811 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
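The sed pipeline above splices a hosts plugin block into the CoreDNS Corefile so that in-cluster lookups of host.minikube.internal resolve to the host gateway (192.168.94.1 on this network). The injected stanza can be confirmed by dumping the ConfigMap:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expected to contain, ahead of the forward stanza:
	#     hosts {
	#        192.168.94.1 host.minikube.internal
	#        fallthrough
	#     }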
	I1124 03:12:39.990213  658811 node_ready.go:35] waiting up to 6m0s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:12:40.170444  658811 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1124 03:12:36.707151  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:39.206261  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:41.206507  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	I1124 03:12:40.171595  658811 addons.go:530] duration metric: took 504.363947ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:12:40.492653  658811 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-284604" context rescaled to 1 replicas
	W1124 03:12:41.992667  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:43.993353  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:41.399566  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.899302  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.705614  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.706618  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.993493  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:47.993708  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:46.399440  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.399607  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.205812  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:50.206724  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:50.493353  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	I1124 03:12:50.993323  658811 node_ready.go:49] node "embed-certs-284604" is "Ready"
	I1124 03:12:50.993350  658811 node_ready.go:38] duration metric: took 11.003110454s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:12:50.993367  658811 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:50.993411  658811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:51.005273  658811 api_server.go:72] duration metric: took 11.338089025s to wait for apiserver process to appear ...
	I1124 03:12:51.005299  658811 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:51.005319  658811 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:12:51.010460  658811 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:12:51.011346  658811 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:51.011367  658811 api_server.go:131] duration metric: took 6.06186ms to wait for apiserver health ...
	I1124 03:12:51.011376  658811 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:51.014056  658811 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:51.014084  658811 system_pods.go:61] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.014092  658811 system_pods.go:61] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.014101  658811 system_pods.go:61] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.014106  658811 system_pods.go:61] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.014113  658811 system_pods.go:61] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.014119  658811 system_pods.go:61] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.014136  658811 system_pods.go:61] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.014147  658811 system_pods.go:61] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.014155  658811 system_pods.go:74] duration metric: took 2.773001ms to wait for pod list to return data ...
	I1124 03:12:51.014164  658811 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:51.016349  658811 default_sa.go:45] found service account: "default"
	I1124 03:12:51.016366  658811 default_sa.go:55] duration metric: took 2.196577ms for default service account to be created ...
	I1124 03:12:51.016373  658811 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:51.018741  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.018763  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.018768  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.018774  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.018778  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.018783  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.018787  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.018791  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.018798  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.018817  658811 retry.go:31] will retry after 267.963041ms: missing components: kube-dns
	I1124 03:12:51.291183  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.291223  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.291231  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.291239  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.291244  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.291250  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.291255  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.291260  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.291268  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.291295  658811 retry.go:31] will retry after 316.287047ms: missing components: kube-dns
	I1124 03:12:51.610985  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.611019  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.611026  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.611037  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.611045  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.611055  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.611061  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.611066  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.611074  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.611098  658811 retry.go:31] will retry after 440.03042ms: missing components: kube-dns
	I1124 03:12:52.054793  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:52.054821  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:52.054826  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:52.054831  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:52.054835  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:52.054839  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:52.054842  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:52.054845  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:52.054850  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:52.054863  658811 retry.go:31] will retry after 498.386661ms: missing components: kube-dns
	I1124 03:12:52.557040  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:52.557071  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Running
	I1124 03:12:52.557079  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:52.557084  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:52.557089  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:52.557095  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:52.557100  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:52.557104  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:52.557110  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Running
	I1124 03:12:52.557120  658811 system_pods.go:126] duration metric: took 1.540739928s to wait for k8s-apps to be running ...
	I1124 03:12:52.557134  658811 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:52.557188  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:52.570482  658811 system_svc.go:56] duration metric: took 13.341226ms WaitForService to wait for kubelet
	I1124 03:12:52.570511  658811 kubeadm.go:587] duration metric: took 12.903331916s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:52.570535  658811 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:52.573089  658811 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:52.573117  658811 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:52.573148  658811 node_conditions.go:105] duration metric: took 2.605161ms to run NodePressure ...
	I1124 03:12:52.573166  658811 start.go:242] waiting for startup goroutines ...
	I1124 03:12:52.573175  658811 start.go:247] waiting for cluster config update ...
	I1124 03:12:52.573187  658811 start.go:256] writing updated cluster config ...
	I1124 03:12:52.573408  658811 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:52.576899  658811 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:52.580189  658811 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.584242  658811 pod_ready.go:94] pod "coredns-66bc5c9577-89mzc" is "Ready"
	I1124 03:12:52.584262  658811 pod_ready.go:86] duration metric: took 4.045428ms for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.586066  658811 pod_ready.go:83] waiting for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.590045  658811 pod_ready.go:94] pod "etcd-embed-certs-284604" is "Ready"
	I1124 03:12:52.590064  658811 pod_ready.go:86] duration metric: took 3.981268ms for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.592126  658811 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.595532  658811 pod_ready.go:94] pod "kube-apiserver-embed-certs-284604" is "Ready"
	I1124 03:12:52.595555  658811 pod_ready.go:86] duration metric: took 3.408619ms for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.597386  658811 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.980512  658811 pod_ready.go:94] pod "kube-controller-manager-embed-certs-284604" is "Ready"
	I1124 03:12:52.980538  658811 pod_ready.go:86] duration metric: took 383.129867ms for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.181479  658811 pod_ready.go:83] waiting for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.581552  658811 pod_ready.go:94] pod "kube-proxy-bn8fd" is "Ready"
	I1124 03:12:53.581575  658811 pod_ready.go:86] duration metric: took 400.07394ms for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.781409  658811 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.181669  658811 pod_ready.go:94] pod "kube-scheduler-embed-certs-284604" is "Ready"
	I1124 03:12:54.181696  658811 pod_ready.go:86] duration metric: took 400.263506ms for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.181712  658811 pod_ready.go:40] duration metric: took 1.604781402s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:54.228480  658811 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:54.231260  658811 out.go:179] * Done! kubectl is now configured to use "embed-certs-284604" cluster and "default" namespace by default
	W1124 03:12:50.399926  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:52.400576  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:52.900171  656542 pod_ready.go:94] pod "coredns-66bc5c9577-w62hm" is "Ready"
	I1124 03:12:52.900193  656542 pod_ready.go:86] duration metric: took 33.505834176s for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.903110  656542 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.907513  656542 pod_ready.go:94] pod "etcd-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:52.907539  656542 pod_ready.go:86] duration metric: took 4.401311ms for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.909400  656542 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.913156  656542 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:52.913178  656542 pod_ready.go:86] duration metric: took 3.755745ms for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.914951  656542 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.098380  656542 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:53.098409  656542 pod_ready.go:86] duration metric: took 183.435612ms for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.298588  656542 pod_ready.go:83] waiting for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.698811  656542 pod_ready.go:94] pod "kube-proxy-xgjzs" is "Ready"
	I1124 03:12:53.698835  656542 pod_ready.go:86] duration metric: took 400.225655ms for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.898023  656542 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.299083  656542 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:54.299107  656542 pod_ready.go:86] duration metric: took 401.0576ms for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.299119  656542 pod_ready.go:40] duration metric: took 34.911155437s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:54.345901  656542 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:54.347541  656542 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-993813" cluster and "default" namespace by default
	W1124 03:12:52.208247  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:54.707505  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	I1124 03:12:56.206822  657716 pod_ready.go:94] pod "coredns-66bc5c9577-9n5xf" is "Ready"
	I1124 03:12:56.206857  657716 pod_ready.go:86] duration metric: took 34.50580389s for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.209449  657716 pod_ready.go:83] waiting for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.213288  657716 pod_ready.go:94] pod "etcd-no-preload-603010" is "Ready"
	I1124 03:12:56.213310  657716 pod_ready.go:86] duration metric: took 3.839555ms for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.215450  657716 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.219181  657716 pod_ready.go:94] pod "kube-apiserver-no-preload-603010" is "Ready"
	I1124 03:12:56.219201  657716 pod_ready.go:86] duration metric: took 3.726981ms for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.221198  657716 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.404873  657716 pod_ready.go:94] pod "kube-controller-manager-no-preload-603010" is "Ready"
	I1124 03:12:56.404930  657716 pod_ready.go:86] duration metric: took 183.709106ms for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.605567  657716 pod_ready.go:83] waiting for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.005571  657716 pod_ready.go:94] pod "kube-proxy-swj6c" is "Ready"
	I1124 03:12:57.005598  657716 pod_ready.go:86] duration metric: took 400.0046ms for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.205842  657716 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.605312  657716 pod_ready.go:94] pod "kube-scheduler-no-preload-603010" is "Ready"
	I1124 03:12:57.605336  657716 pod_ready.go:86] duration metric: took 399.465818ms for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.605349  657716 pod_ready.go:40] duration metric: took 35.907419342s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:57.646839  657716 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:57.648681  657716 out.go:179] * Done! kubectl is now configured to use "no-preload-603010" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.613984866Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4456f3e7-d517-4e61-a09f-74e9fa6d7d66 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.615161698Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn/dashboard-metrics-scraper" id=9e6a4635-dc7f-46f8-8883-69b41e6c3a4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.615321114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.622367741Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.623098142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.65256355Z" level=info msg="Created container 2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn/dashboard-metrics-scraper" id=9e6a4635-dc7f-46f8-8883-69b41e6c3a4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.653216613Z" level=info msg="Starting container: 2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c" id=03fc8b25-840f-4739-b46e-d17f820c6995 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.655371066Z" level=info msg="Started container" PID=1733 containerID=2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn/dashboard-metrics-scraper id=03fc8b25-840f-4739-b46e-d17f820c6995 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48627d619904c30a6f412d1072a7c5ed911c07848137b64f48e6c3a4c488f8d1
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.778250175Z" level=info msg="Removing container: 33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e" id=24115c28-af42-4be4-a391-de8341d52be2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.790511351Z" level=info msg="Removed container 33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn/dashboard-metrics-scraper" id=24115c28-af42-4be4-a391-de8341d52be2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.638642158Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.644770588Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.644793532Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.644809743Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.652285018Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.652307627Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.652325258Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.658197571Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.658217717Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.658232766Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.663119394Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.663144221Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.663162319Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.668760814Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.668779201Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2a433efd31d37       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   48627d619904c       dashboard-metrics-scraper-6ffb444bf9-2j8cn   kubernetes-dashboard
	2a23c4740fd8a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   93d06eb70c021       storage-provisioner                          kube-system
	e35e4778c80df       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   a5a6763732c2c       kubernetes-dashboard-855c9754f9-sfsh5        kubernetes-dashboard
	05af58f5afef5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   9ce75a73ca5a2       busybox                                      default
	3d408a41820da       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   b2a5d8667c2cc       coredns-66bc5c9577-9n5xf                     kube-system
	0538658dae8ee       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   48ee827cfee63       kindnet-7gvgm                                kube-system
	ba401cc056a95       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   e46f4ece9ce0f       kube-proxy-swj6c                             kube-system
	3072e8ebabeb4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   93d06eb70c021       storage-provisioner                          kube-system
	3dc1c0625a30c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   6e3b084fcb2a4       kube-controller-manager-no-preload-603010    kube-system
	4e8e84f339bed       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   b64e9bd1664f4       etcd-no-preload-603010                       kube-system
	7294ac6825ca8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   bb95d9f85adce       kube-scheduler-no-preload-603010             kube-system
	767a9908e7593       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   d0553787479da       kube-apiserver-no-preload-603010             kube-system
	
	
	==> coredns [3d408a41820da3c6cec44b2639564b549a6b0a8af9e865107309ce3c569dd8b2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59195 - 55010 "HINFO IN 1408894213094709921.4509357215920153716. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.509455186s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-603010
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-603010
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=no-preload-603010
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_11_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:11:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-603010
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:13:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:12:50 +0000   Mon, 24 Nov 2025 03:11:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:12:50 +0000   Mon, 24 Nov 2025 03:11:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:12:50 +0000   Mon, 24 Nov 2025 03:11:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:12:50 +0000   Mon, 24 Nov 2025 03:11:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-603010
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                1b59d48b-7e38-42b7-9a74-cd736c856d5f
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-9n5xf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-no-preload-603010                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-7gvgm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-no-preload-603010              250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-no-preload-603010     200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-swj6c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-no-preload-603010              100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2j8cn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sfsh5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node no-preload-603010 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node no-preload-603010 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node no-preload-603010 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     115s               kubelet          Node no-preload-603010 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node no-preload-603010 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node no-preload-603010 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node no-preload-603010 event: Registered Node no-preload-603010 in Controller
	  Normal  NodeReady                96s                kubelet          Node no-preload-603010 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node no-preload-603010 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node no-preload-603010 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node no-preload-603010 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node no-preload-603010 event: Registered Node no-preload-603010 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [4e8e84f339bed2139577a34b2e6715ab1a8d9a3b425a16886245d61781115cf0] <==
	{"level":"warn","ts":"2025-11-24T03:12:19.085170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.092672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.103049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.115366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.131549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.138417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.151346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.160504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.175727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.182813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.191984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.201405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.237600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.244715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.262564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.269639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.287749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.292138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.310039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.379668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46178","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T03:12:20.539724Z","caller":"traceutil/trace.go:172","msg":"trace[149637593] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"148.113999ms","start":"2025-11-24T03:12:20.391595Z","end":"2025-11-24T03:12:20.539709Z","steps":["trace[149637593] 'process raft request'  (duration: 112.11995ms)","trace[149637593] 'compare'  (duration: 35.884371ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T03:12:20.539725Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.935509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" limit:1 ","response":"range_response_count:1 size:1137"}
	{"level":"info","ts":"2025-11-24T03:12:20.539790Z","caller":"traceutil/trace.go:172","msg":"trace[1011386953] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:1; response_revision:476; }","duration":"135.018825ms","start":"2025-11-24T03:12:20.404759Z","end":"2025-11-24T03:12:20.539777Z","steps":["trace[1011386953] 'agreement among raft nodes before linearized reading'  (duration: 98.901475ms)","trace[1011386953] 'range keys from in-memory index tree'  (duration: 35.938152ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:20.873490Z","caller":"traceutil/trace.go:172","msg":"trace[1619096431] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"158.121397ms","start":"2025-11-24T03:12:20.715348Z","end":"2025-11-24T03:12:20.873470Z","steps":["trace[1619096431] 'process raft request'  (duration: 70.181338ms)","trace[1619096431] 'compare'  (duration: 87.728884ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:20.873494Z","caller":"traceutil/trace.go:172","msg":"trace[918360513] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"154.944282ms","start":"2025-11-24T03:12:20.718524Z","end":"2025-11-24T03:12:20.873469Z","steps":["trace[918360513] 'process raft request'  (duration: 154.86791ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:13:12 up  1:55,  0 user,  load average: 4.55, 4.15, 2.73
	Linux no-preload-603010 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0538658dae8eeb1e72082ae5de429b78aaf9874931620b324b5b39bcd20d564e] <==
	I1124 03:12:21.438141       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:12:21.438414       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:12:21.438639       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:12:21.438662       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:12:21.438685       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:12:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:12:21.637882       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:12:21.637960       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:12:21.637972       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:12:21.735030       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 03:12:51.639025       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 03:12:51.639024       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 03:12:51.639027       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 03:12:51.639039       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1124 03:12:52.941229       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:12:52.941274       1 metrics.go:72] Registering metrics
	I1124 03:12:52.941387       1 controller.go:711] "Syncing nftables rules"
	I1124 03:13:01.638326       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:13:01.638405       1 main.go:301] handling current node
	I1124 03:13:11.646014       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:13:11.646061       1 main.go:301] handling current node
	
	
	==> kube-apiserver [767a9908e75937581bb6f9fd527e760a401658cde3d3cf28bc2b66613eab1027] <==
	I1124 03:12:19.995274       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 03:12:19.995648       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 03:12:19.995719       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 03:12:19.995728       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 03:12:19.997552       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 03:12:19.997764       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 03:12:19.997817       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 03:12:19.998803       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 03:12:20.001492       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 03:12:20.001546       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:12:20.011923       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 03:12:20.014456       1 policy_source.go:240] refreshing policies
	I1124 03:12:20.015462       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 03:12:20.067718       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:12:20.377482       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:12:20.612179       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:12:20.668057       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:12:20.714749       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:12:20.881605       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:12:21.012244       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:12:21.107002       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.105.77"}
	I1124 03:12:21.147442       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.165.131"}
	I1124 03:12:23.729659       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:12:23.779823       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:12:23.880590       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3dc1c0625a30c329549c40b7e0c43d1ab4d818ccc0fb85ea668066af23a327d3] <==
	I1124 03:12:23.320942       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:12:23.325395       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:12:23.325411       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:12:23.325422       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:12:23.325550       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 03:12:23.325818       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:12:23.325834       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:12:23.325952       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:12:23.325963       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:12:23.326461       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:12:23.326654       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:12:23.327067       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:12:23.327194       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:12:23.327241       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:12:23.330145       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:12:23.330987       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:12:23.332178       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:12:23.334506       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:12:23.334508       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:12:23.336698       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 03:12:23.338947       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 03:12:23.338961       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:12:23.339037       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:12:23.340210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:12:23.359511       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ba401cc056a953c5699c15cbf074185bee5218833058db0fed286d0270ae02ba] <==
	I1124 03:12:21.227389       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:12:21.301440       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:12:21.402435       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:12:21.402559       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 03:12:21.402732       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:12:21.426279       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:12:21.426404       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:12:21.431844       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:12:21.432292       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:12:21.432335       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:12:21.433861       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:12:21.433909       1 config.go:200] "Starting service config controller"
	I1124 03:12:21.433919       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:12:21.433923       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:12:21.433945       1 config.go:309] "Starting node config controller"
	I1124 03:12:21.433952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:12:21.433953       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:12:21.433959       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:12:21.534914       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:12:21.534934       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:12:21.534924       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:12:21.534981       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [7294ac6825ca8afcd936803374aa461bcb4d637ea8743600784037c5acb225b2] <==
	I1124 03:12:17.080470       1 serving.go:386] Generated self-signed cert in-memory
	W1124 03:12:19.973585       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:12:19.973626       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1124 03:12:19.973642       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:12:19.973651       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:12:20.014247       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:12:20.014284       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:12:20.018512       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:12:20.019133       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:12:20.019212       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:12:20.019271       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:12:20.119768       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:12:23 no-preload-603010 kubelet[718]: I1124 03:12:23.986006     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/759895bc-23f3-4a43-b1a5-2a34cb7593bc-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2j8cn\" (UID: \"759895bc-23f3-4a43-b1a5-2a34cb7593bc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn"
	Nov 24 03:12:23 no-preload-603010 kubelet[718]: I1124 03:12:23.986058     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4cqh\" (UniqueName: \"kubernetes.io/projected/4271eb57-8093-4453-8aad-0faa0f0d1c1e-kube-api-access-c4cqh\") pod \"kubernetes-dashboard-855c9754f9-sfsh5\" (UID: \"4271eb57-8093-4453-8aad-0faa0f0d1c1e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sfsh5"
	Nov 24 03:12:23 no-preload-603010 kubelet[718]: I1124 03:12:23.986182     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4271eb57-8093-4453-8aad-0faa0f0d1c1e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-sfsh5\" (UID: \"4271eb57-8093-4453-8aad-0faa0f0d1c1e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sfsh5"
	Nov 24 03:12:23 no-preload-603010 kubelet[718]: I1124 03:12:23.986249     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vxch\" (UniqueName: \"kubernetes.io/projected/759895bc-23f3-4a43-b1a5-2a34cb7593bc-kube-api-access-4vxch\") pod \"dashboard-metrics-scraper-6ffb444bf9-2j8cn\" (UID: \"759895bc-23f3-4a43-b1a5-2a34cb7593bc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn"
	Nov 24 03:12:25 no-preload-603010 kubelet[718]: I1124 03:12:25.738115     718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 03:12:31 no-preload-603010 kubelet[718]: I1124 03:12:31.489969     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sfsh5" podStartSLOduration=3.366297844 podStartE2EDuration="8.48987631s" podCreationTimestamp="2025-11-24 03:12:23 +0000 UTC" firstStartedPulling="2025-11-24 03:12:24.200304259 +0000 UTC m=+8.729436688" lastFinishedPulling="2025-11-24 03:12:29.323882724 +0000 UTC m=+13.853015154" observedRunningTime="2025-11-24 03:12:29.736222431 +0000 UTC m=+14.265354896" watchObservedRunningTime="2025-11-24 03:12:31.48987631 +0000 UTC m=+16.019008751"
	Nov 24 03:12:32 no-preload-603010 kubelet[718]: I1124 03:12:32.724089     718 scope.go:117] "RemoveContainer" containerID="d37d22bd32705cdf7290134d2fef83db23d75f3fdd279150bb28ca47b472c963"
	Nov 24 03:12:33 no-preload-603010 kubelet[718]: I1124 03:12:33.727773     718 scope.go:117] "RemoveContainer" containerID="d37d22bd32705cdf7290134d2fef83db23d75f3fdd279150bb28ca47b472c963"
	Nov 24 03:12:33 no-preload-603010 kubelet[718]: I1124 03:12:33.727976     718 scope.go:117] "RemoveContainer" containerID="33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e"
	Nov 24 03:12:33 no-preload-603010 kubelet[718]: E1124 03:12:33.728179     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2j8cn_kubernetes-dashboard(759895bc-23f3-4a43-b1a5-2a34cb7593bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn" podUID="759895bc-23f3-4a43-b1a5-2a34cb7593bc"
	Nov 24 03:12:34 no-preload-603010 kubelet[718]: I1124 03:12:34.731327     718 scope.go:117] "RemoveContainer" containerID="33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e"
	Nov 24 03:12:34 no-preload-603010 kubelet[718]: E1124 03:12:34.731535     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2j8cn_kubernetes-dashboard(759895bc-23f3-4a43-b1a5-2a34cb7593bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn" podUID="759895bc-23f3-4a43-b1a5-2a34cb7593bc"
	Nov 24 03:12:40 no-preload-603010 kubelet[718]: I1124 03:12:40.793558     718 scope.go:117] "RemoveContainer" containerID="33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e"
	Nov 24 03:12:40 no-preload-603010 kubelet[718]: E1124 03:12:40.793705     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2j8cn_kubernetes-dashboard(759895bc-23f3-4a43-b1a5-2a34cb7593bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn" podUID="759895bc-23f3-4a43-b1a5-2a34cb7593bc"
	Nov 24 03:12:51 no-preload-603010 kubelet[718]: I1124 03:12:51.771946     718 scope.go:117] "RemoveContainer" containerID="3072e8ebabeb4373de4efeab47db549507d3ee4e0654e8677138ab8f8c18ece3"
	Nov 24 03:12:52 no-preload-603010 kubelet[718]: I1124 03:12:52.612094     718 scope.go:117] "RemoveContainer" containerID="33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e"
	Nov 24 03:12:52 no-preload-603010 kubelet[718]: I1124 03:12:52.776915     718 scope.go:117] "RemoveContainer" containerID="33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e"
	Nov 24 03:12:52 no-preload-603010 kubelet[718]: I1124 03:12:52.777246     718 scope.go:117] "RemoveContainer" containerID="2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c"
	Nov 24 03:12:52 no-preload-603010 kubelet[718]: E1124 03:12:52.777502     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2j8cn_kubernetes-dashboard(759895bc-23f3-4a43-b1a5-2a34cb7593bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn" podUID="759895bc-23f3-4a43-b1a5-2a34cb7593bc"
	Nov 24 03:13:00 no-preload-603010 kubelet[718]: I1124 03:13:00.792998     718 scope.go:117] "RemoveContainer" containerID="2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c"
	Nov 24 03:13:00 no-preload-603010 kubelet[718]: E1124 03:13:00.793151     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2j8cn_kubernetes-dashboard(759895bc-23f3-4a43-b1a5-2a34cb7593bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn" podUID="759895bc-23f3-4a43-b1a5-2a34cb7593bc"
	Nov 24 03:13:09 no-preload-603010 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 03:13:09 no-preload-603010 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 03:13:09 no-preload-603010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 03:13:09 no-preload-603010 systemd[1]: kubelet.service: Consumed 1.568s CPU time.
	
	
	==> kubernetes-dashboard [e35e4778c80df433ced61266b491a3bff7391fc67271709f5ef3f7509c962a42] <==
	2025/11/24 03:12:29 Starting overwatch
	2025/11/24 03:12:29 Using namespace: kubernetes-dashboard
	2025/11/24 03:12:29 Using in-cluster config to connect to apiserver
	2025/11/24 03:12:29 Using secret token for csrf signing
	2025/11/24 03:12:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 03:12:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 03:12:29 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 03:12:29 Generating JWE encryption key
	2025/11/24 03:12:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 03:12:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 03:12:30 Initializing JWE encryption key from synchronized object
	2025/11/24 03:12:30 Creating in-cluster Sidecar client
	2025/11/24 03:12:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 03:12:30 Serving insecurely on HTTP port: 9090
	2025/11/24 03:13:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2a23c4740fd8a0b86f68bdde06ff7fc26aef5bd492c29ae3555a8b8bd1103d39] <==
	I1124 03:12:51.820815       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:12:51.828620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:12:51.828667       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:12:51.831063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:55.285686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:59.545387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:03.143498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:06.197193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:09.219413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:09.223647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:13:09.223811       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:13:09.223934       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-603010_d585a338-9708-4771-a89c-fcd3d1b04230!
	I1124 03:13:09.223927       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5b777d2-712b-44e4-a3bf-a14213c57432", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-603010_d585a338-9708-4771-a89c-fcd3d1b04230 became leader
	W1124 03:13:09.225846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:09.228570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:13:09.324240       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-603010_d585a338-9708-4771-a89c-fcd3d1b04230!
	W1124 03:13:11.231500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:11.236133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [3072e8ebabeb4373de4efeab47db549507d3ee4e0654e8677138ab8f8c18ece3] <==
	I1124 03:12:21.119565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 03:12:51.131024       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-603010 -n no-preload-603010
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-603010 -n no-preload-603010: exit status 2 (318.574453ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-603010 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-603010
helpers_test.go:243: (dbg) docker inspect no-preload-603010:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845",
	        "Created": "2025-11-24T03:10:43.847831353Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 658004,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:12:06.634961378Z",
	            "FinishedAt": "2025-11-24T03:12:05.766626103Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845/hostname",
	        "HostsPath": "/var/lib/docker/containers/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845/hosts",
	        "LogPath": "/var/lib/docker/containers/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845/6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845-json.log",
	        "Name": "/no-preload-603010",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-603010:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-603010",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6cf4d6c6dc34016d533a023ff6999c0549fd0891c2bcc2d01951669dd101a845",
	                "LowerDir": "/var/lib/docker/overlay2/883ce309da55410098919db1c8e27882341f46628e370779f1e1f648adf5829e-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/883ce309da55410098919db1c8e27882341f46628e370779f1e1f648adf5829e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/883ce309da55410098919db1c8e27882341f46628e370779f1e1f648adf5829e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/883ce309da55410098919db1c8e27882341f46628e370779f1e1f648adf5829e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-603010",
	                "Source": "/var/lib/docker/volumes/no-preload-603010/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-603010",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-603010",
	                "name.minikube.sigs.k8s.io": "no-preload-603010",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "281ef9322f023703a943204ee9ffc8d29e01369033d640a5c45ee0792c21fb26",
	            "SandboxKey": "/var/run/docker/netns/281ef9322f02",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33493"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33494"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33497"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33495"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33496"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-603010": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6fb41680caede660e77e75cbbc4bea8a2931e68f7736aa43850d10472e9557bd",
	                    "EndpointID": "ba54ef7afa710fa53c8fb56a6f238e95db2d97a616cc792bb45634538f8d22bd",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "fe:1d:9c:72:30:d9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-603010",
	                        "6cf4d6c6dc34"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603010 -n no-preload-603010
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603010 -n no-preload-603010: exit status 2 (314.908639ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-603010 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-603010 logs -n 25: (1.145346106s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p old-k8s-version-579951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p newest-cni-438041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ start   │ -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:11 UTC │ 24 Nov 25 03:11 UTC │
	│ image   │ newest-cni-438041 image list --format=json                                                                                                                                                                                                    │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p newest-cni-438041 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-993813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p no-preload-603010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                                                                                          │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p disable-driver-mounts-242597                                                                                                                                                                                                               │ disable-driver-mounts-242597 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ image   │ old-k8s-version-579951 image list --format=json                                                                                                                                                                                               │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p old-k8s-version-579951 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                                                                                     │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                                                                                     │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable metrics-server -p embed-certs-284604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ stop    │ -p embed-certs-284604 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ image   │ default-k8s-diff-port-993813 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ pause   │ -p default-k8s-diff-port-993813 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ image   │ no-preload-603010 image list --format=json                                                                                                                                                                                                    │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ pause   │ -p no-preload-603010 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993813                                                                                                                                                                                                               │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:12:09
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:12:09.055015  658811 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:12:09.055230  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055247  658811 out.go:374] Setting ErrFile to fd 2...
	I1124 03:12:09.055253  658811 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:12:09.055468  658811 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:12:09.055909  658811 out.go:368] Setting JSON to false
	I1124 03:12:09.056956  658811 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6876,"bootTime":1763947053,"procs":297,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:12:09.057009  658811 start.go:143] virtualization: kvm guest
	I1124 03:12:09.058671  658811 out.go:179] * [embed-certs-284604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:12:09.059850  658811 notify.go:221] Checking for updates...
	I1124 03:12:09.059855  658811 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:12:09.061128  658811 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:12:09.062317  658811 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:09.063358  658811 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:12:09.064255  658811 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:12:09.065078  658811 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:12:09.066407  658811 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066509  658811 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.066589  658811 config.go:182] Loaded profile config "old-k8s-version-579951": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 03:12:09.066666  658811 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:12:09.089713  658811 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:12:09.089855  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.145948  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.135562124 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.146071  658811 docker.go:319] overlay module found
	I1124 03:12:09.147708  658811 out.go:179] * Using the docker driver based on user configuration
	I1124 03:12:09.148714  658811 start.go:309] selected driver: docker
	I1124 03:12:09.148737  658811 start.go:927] validating driver "docker" against <nil>
	I1124 03:12:09.148747  658811 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:12:09.149338  658811 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:12:09.210343  658811 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-24 03:12:09.200351707 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:12:09.210534  658811 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:12:09.210794  658811 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:09.212381  658811 out.go:179] * Using Docker driver with root privileges
	I1124 03:12:09.213398  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:09.213482  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:09.213497  658811 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:12:09.213574  658811 start.go:353] cluster config:
	{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:09.214730  658811 out.go:179] * Starting "embed-certs-284604" primary control-plane node in "embed-certs-284604" cluster
	I1124 03:12:09.215613  658811 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:12:09.216663  658811 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:12:09.217654  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.217694  658811 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:12:09.217703  658811 cache.go:65] Caching tarball of preloaded images
	I1124 03:12:09.217732  658811 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:12:09.217791  658811 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:12:09.217808  658811 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:12:09.217977  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:09.218021  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json: {Name:mkd4898576ebe0ebf6d2ca35fddd33eac8f127df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:09.239944  658811 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:12:09.239962  658811 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:12:09.239976  658811 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:12:09.240004  658811 start.go:360] acquireMachinesLock for embed-certs-284604: {Name:mkd39be5908e1d289ed5af40b6c2b1c510beffd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:12:09.240088  658811 start.go:364] duration metric: took 68.665µs to acquireMachinesLock for "embed-certs-284604"
	I1124 03:12:09.240109  658811 start.go:93] Provisioning new machine with config: &{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:09.240182  658811 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:12:05.014758  656542 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-993813" ...
	I1124 03:12:05.014805  656542 cli_runner.go:164] Run: docker start default-k8s-diff-port-993813
	I1124 03:12:05.297424  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:05.316835  656542 kic.go:430] container "default-k8s-diff-port-993813" state is running.
	I1124 03:12:05.317309  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:05.336690  656542 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/config.json ...
	I1124 03:12:05.336923  656542 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:05.336992  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:05.356564  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:05.356863  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:05.356907  656542 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:05.357642  656542 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:39256->127.0.0.1:33488: read: connection reset by peer
	I1124 03:12:08.497704  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.497744  656542 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-993813"
	I1124 03:12:08.497799  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.516284  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.516620  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.516642  656542 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-993813 && echo "default-k8s-diff-port-993813" | sudo tee /etc/hostname
	I1124 03:12:08.664299  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-993813
	
	I1124 03:12:08.664399  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:08.683215  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:08.683424  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:08.683440  656542 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-993813' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-993813/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-993813' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:08.824495  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
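For context: the SSH script above is minikube's hosts-file guard. If /etc/hosts already names the machine it is left untouched; otherwise an existing 127.0.1.1 entry is rewritten, or a fresh one is appended. A minimal Go sketch of the same decision, operating on the file contents directly (function and variable names here are illustrative, not from the minikube tree):

package main

import (
	"fmt"
	"strings"
)

// patchHosts applies the guard from the SSH script above: keep the file
// as-is if the hostname already appears at the end of a line, rewrite an
// existing 127.0.1.1 entry if there is one, otherwise append a new entry.
func patchHosts(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		fields := strings.Fields(l)
		if len(fields) > 0 && fields[len(fields)-1] == name {
			return hosts // hostname already present, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing entry
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name // append a fresh entry
}

func main() {
	fmt.Println(patchHosts("127.0.0.1 localhost", "default-k8s-diff-port-993813"))
}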
	I1124 03:12:08.824534  656542 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:08.824571  656542 ubuntu.go:190] setting up certificates
	I1124 03:12:08.824597  656542 provision.go:84] configureAuth start
	I1124 03:12:08.824659  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:08.842592  656542 provision.go:143] copyHostCerts
	I1124 03:12:08.842639  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:08.842651  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:08.842701  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:08.842805  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:08.842813  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:08.842838  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:08.842940  656542 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:08.842950  656542 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:08.842981  656542 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:08.843051  656542 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-993813 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-993813 localhost minikube]
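The SAN list on that server cert (loopback, the container's bridge IP, the machine name, localhost, minikube) is what lets one certificate satisfy both host-side and in-container dialers. A sketch of the equivalent Go x509 template, with the values copied from the provision.go line above (the template shape is illustrative, not minikube's actual code):

package main

import (
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"net"
)

// serverCertTemplate mirrors the org and SAN set printed in the log line
// above; a real certificate would also need SerialNumber, validity, etc.
func serverCertTemplate() *x509.Certificate {
	return &x509.Certificate{
		Subject:  pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-993813"}},
		DNSNames: []string{"default-k8s-diff-port-993813", "localhost", "minikube"},
		IPAddresses: []net.IP{
			net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.76.2"),
		},
	}
}

func main() {
	fmt.Println(serverCertTemplate().DNSNames)
}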
	I1124 03:12:08.993088  656542 provision.go:177] copyRemoteCerts
	I1124 03:12:08.993141  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:08.993180  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.010481  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.112610  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:09.134182  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 03:12:09.153393  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:12:09.173516  656542 provision.go:87] duration metric: took 348.902104ms to configureAuth
	I1124 03:12:09.173547  656542 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:09.173717  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:09.173820  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.195519  656542 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:09.195738  656542 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33488 <nil> <nil>}
	I1124 03:12:09.195756  656542 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.551404  656542 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:09.551434  656542 machine.go:97] duration metric: took 4.214494542s to provisionDockerMachine
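The step that just completed wrote a one-line environment file consumed by the crio systemd unit, then restarted the service; the effect is that CRI-O treats the in-cluster service CIDR as an insecure registry. A hedged sketch of the file-building helper (name and signature invented for illustration; the literal content is taken from the SSH command above):

package main

import "fmt"

// crioMinikubeOpts renders /etc/sysconfig/crio.minikube as seen in the
// SSH command above: a single CRIO_MINIKUBE_OPTIONS line marking the
// service CIDR as an insecure registry.
func crioMinikubeOpts(serviceCIDR string) string {
	return fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='--insecure-registry %s '\n", serviceCIDR)
}

func main() {
	fmt.Print(crioMinikubeOpts("10.96.0.0/12"))
}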
	I1124 03:12:09.551449  656542 start.go:293] postStartSetup for "default-k8s-diff-port-993813" (driver="docker")
	I1124 03:12:09.551463  656542 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:09.551533  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:09.551574  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.572440  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.684044  656542 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:09.688328  656542 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:09.688354  656542 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:09.688365  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:09.688414  656542 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:09.688488  656542 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:09.688660  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:09.696023  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:09.725715  656542 start.go:296] duration metric: took 174.248037ms for postStartSetup
	I1124 03:12:09.725795  656542 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:09.725851  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.747235  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:06.610202  657716 out.go:252] * Restarting existing docker container for "no-preload-603010" ...
	I1124 03:12:06.610267  657716 cli_runner.go:164] Run: docker start no-preload-603010
	I1124 03:12:06.895418  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:06.913279  657716 kic.go:430] container "no-preload-603010" state is running.
	I1124 03:12:06.913694  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:06.931543  657716 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/config.json ...
	I1124 03:12:06.931779  657716 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:06.931840  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:06.949180  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:06.949422  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:06.949436  657716 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:06.950106  657716 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53738->127.0.0.1:33493: read: connection reset by peer
	I1124 03:12:10.094410  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.094455  657716 ubuntu.go:182] provisioning hostname "no-preload-603010"
	I1124 03:12:10.094548  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.117277  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.117614  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.117637  657716 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-603010 && echo "no-preload-603010" | sudo tee /etc/hostname
	I1124 03:12:10.272082  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-603010
	
	I1124 03:12:10.272162  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.293197  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.293525  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.293557  657716 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-603010' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-603010/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-603010' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:10.440289  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:12:10.440322  657716 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:10.440350  657716 ubuntu.go:190] setting up certificates
	I1124 03:12:10.440374  657716 provision.go:84] configureAuth start
	I1124 03:12:10.440443  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:10.458672  657716 provision.go:143] copyHostCerts
	I1124 03:12:10.458743  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:10.458766  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:10.458857  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:10.459021  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:10.459037  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:10.459080  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:10.459183  657716 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:10.459195  657716 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:10.459232  657716 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:10.459323  657716 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.no-preload-603010 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-603010]
	I1124 03:12:10.546420  657716 provision.go:177] copyRemoteCerts
	I1124 03:12:10.546503  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:10.546552  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.564799  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:10.669343  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:10.687953  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:10.707320  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:10.728398  657716 provision.go:87] duration metric: took 288.002675ms to configureAuth
	I1124 03:12:10.728450  657716 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:10.728791  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:10.728992  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:10.754544  657716 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:10.754857  657716 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33493 <nil> <nil>}
	I1124 03:12:10.754907  657716 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:09.846210  656542 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:09.851045  656542 fix.go:56] duration metric: took 4.853815531s for fixHost
	I1124 03:12:09.851067  656542 start.go:83] releasing machines lock for "default-k8s-diff-port-993813", held for 4.853861223s
	I1124 03:12:09.851139  656542 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-993813
	I1124 03:12:09.871679  656542 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:09.871744  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.871767  656542 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:09.871859  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:09.897665  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.897832  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:09.996390  656542 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:10.070447  656542 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:10.108350  656542 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:10.113659  656542 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:10.113732  656542 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:10.122258  656542 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
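The find/mv pipeline above pushes aside any bridge or podman CNI config by renaming it with a ".mk_disabled" suffix, so the CNI minikube installs (kindnet, per the later cni.go line) is the only active one. A minimal Go sketch of the same rename pass, assuming the paths and suffix shown in the log:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Disable competing CNI configs the same way the shell pipeline does:
	// anything matching *bridge* or *podman* gets a .mk_disabled suffix.
	matches, err := filepath.Glob("/etc/cni/net.d/*")
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range matches {
		base := filepath.Base(p)
		if strings.HasSuffix(base, ".mk_disabled") {
			continue // already pushed aside
		}
		if strings.Contains(base, "bridge") || strings.Contains(base, "podman") {
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				log.Fatal(err)
			}
			fmt.Println("disabled", p)
		}
	}
}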
	I1124 03:12:10.122274  656542 start.go:496] detecting cgroup driver to use...
	I1124 03:12:10.122301  656542 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:10.122333  656542 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:10.138420  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:10.151623  656542 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:10.151696  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:10.169717  656542 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:10.185403  656542 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:10.268937  656542 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:10.361626  656542 docker.go:234] disabling docker service ...
	I1124 03:12:10.361713  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:10.376259  656542 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:10.389709  656542 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:10.493317  656542 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:10.581163  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:10.594309  656542 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:10.608489  656542 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:10.608559  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.618090  656542 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:10.618147  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.629142  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.639755  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.648289  656542 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:10.657390  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.667835  656542 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:10.677148  656542 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
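Taken together, the sed edits above converge /etc/crio/crio.conf.d/02-crio.conf on a handful of settings: the pinned pause image, systemd as cgroup manager, conmon placed in the pod cgroup, and unprivileged ports opened from 0. Roughly, the drop-in ends up containing lines like the following (only the key/value lines are shown in the log; their surrounding TOML sections are assumed from stock CRI-O configs):

pause_image = "registry.k8s.io/pause:3.10.1"
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]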
	I1124 03:12:10.686554  656542 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:10.694262  656542 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:10.701983  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:10.784645  656542 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:12:13.176259  656542 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.391580237s)
	I1124 03:12:13.176297  656542 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:13.176344  656542 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:13.182771  656542 start.go:564] Will wait 60s for crictl version
	I1124 03:12:13.182920  656542 ssh_runner.go:195] Run: which crictl
	I1124 03:12:13.188282  656542 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:13.221129  656542 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:13.221208  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.256022  656542 ssh_runner.go:195] Run: crio --version
	I1124 03:12:13.289098  656542 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	W1124 03:12:09.667322  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:11.810684  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:09.241811  658811 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:12:09.242074  658811 start.go:159] libmachine.API.Create for "embed-certs-284604" (driver="docker")
	I1124 03:12:09.242107  658811 client.go:173] LocalClient.Create starting
	I1124 03:12:09.242186  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem
	I1124 03:12:09.242224  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242246  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242326  658811 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem
	I1124 03:12:09.242354  658811 main.go:143] libmachine: Decoding PEM data...
	I1124 03:12:09.242374  658811 main.go:143] libmachine: Parsing certificate...
	I1124 03:12:09.242824  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:12:09.259427  658811 cli_runner.go:211] docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:12:09.259477  658811 network_create.go:284] running [docker network inspect embed-certs-284604] to gather additional debugging logs...
	I1124 03:12:09.259492  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604
	W1124 03:12:09.275004  658811 cli_runner.go:211] docker network inspect embed-certs-284604 returned with exit code 1
	I1124 03:12:09.275029  658811 network_create.go:287] error running [docker network inspect embed-certs-284604]: docker network inspect embed-certs-284604: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-284604 not found
	I1124 03:12:09.275039  658811 network_create.go:289] output of [docker network inspect embed-certs-284604]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-284604 not found
	
	** /stderr **
	I1124 03:12:09.275132  658811 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:09.292074  658811 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
	I1124 03:12:09.292745  658811 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-bf033568456f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:42:35:42:16:23:28} reservation:<nil>}
	I1124 03:12:09.293207  658811 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-4ecb12099844 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ce:07:19:c6:91:6e} reservation:<nil>}
	I1124 03:12:09.293801  658811 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-50b2e4e61586 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:4a:0d:88:19:7d:df} reservation:<nil>}
	I1124 03:12:09.294406  658811 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6fb41680caed IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ee:a0:6c:44:95:b2} reservation:<nil>}
	I1124 03:12:09.295273  658811 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001eef7f0}
	I1124 03:12:09.295296  658811 network_create.go:124] attempt to create docker network embed-certs-284604 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1124 03:12:09.295333  658811 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-284604 embed-certs-284604
	I1124 03:12:09.341016  658811 network_create.go:108] docker network embed-certs-284604 192.168.94.0/24 created
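The network.go lines above show the subnet picker at work: candidate /24s start at 192.168.49.0 and the third octet advances by 9 (49, 58, 67, 76, 85, ...) until one is not already claimed by an existing docker bridge, landing here on 192.168.94.0/24. A small Go sketch of that scan, with a map standing in for the real reservation lookup:

package main

import "fmt"

// freeSubnet reproduces the scan visible in the network.go lines above:
// step the third octet by 9 from 49 until a /24 is not in the taken set.
func freeSubnet(taken map[string]bool) (string, bool) {
	for octet := 49; octet <= 247; octet += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[cidr] {
			return cidr, true
		}
	}
	return "", false
}

func main() {
	taken := map[string]bool{
		"192.168.49.0/24": true, "192.168.58.0/24": true,
		"192.168.67.0/24": true, "192.168.76.0/24": true,
		"192.168.85.0/24": true,
	}
	fmt.Println(freeSubnet(taken)) // 192.168.94.0/24 true
}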
	I1124 03:12:09.341044  658811 kic.go:121] calculated static IP "192.168.94.2" for the "embed-certs-284604" container
	I1124 03:12:09.341097  658811 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:12:09.358710  658811 cli_runner.go:164] Run: docker volume create embed-certs-284604 --label name.minikube.sigs.k8s.io=embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:12:09.377491  658811 oci.go:103] Successfully created a docker volume embed-certs-284604
	I1124 03:12:09.377565  658811 cli_runner.go:164] Run: docker run --rm --name embed-certs-284604-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --entrypoint /usr/bin/test -v embed-certs-284604:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:12:09.757637  658811 oci.go:107] Successfully prepared a docker volume embed-certs-284604
	I1124 03:12:09.757726  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:09.757742  658811 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:12:09.757816  658811 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:12:13.055592  658811 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-284604:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (3.297719307s)
	I1124 03:12:13.055632  658811 kic.go:203] duration metric: took 3.29788472s to extract preloaded images to volume ...
	W1124 03:12:13.055721  658811 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 03:12:13.055758  658811 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 03:12:13.055810  658811 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:12:13.124836  658811 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-284604 --name embed-certs-284604 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-284604 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-284604 --network embed-certs-284604 --ip 192.168.94.2 --volume embed-certs-284604:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
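Each --publish=127.0.0.1:: in that docker run binds a random loopback host port; the SSH ports used throughout this log (33488, 33493, 33498) are recovered afterwards with the inspect template that recurs above. A sketch of that recovery, shelling out the same way the cli_runner invocations do:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template as the docker inspect calls in this log: pull the
	// host port that docker mapped to the container's 22/tcp endpoint.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"embed-certs-284604").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh port:", strings.TrimSpace(string(out)))
}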
	I1124 03:12:13.468642  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Running}}
	I1124 03:12:13.493010  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.520114  658811 cli_runner.go:164] Run: docker exec embed-certs-284604 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:12:13.579438  658811 oci.go:144] the created container "embed-certs-284604" has a running status.
	I1124 03:12:13.579473  658811 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa...
	I1124 03:12:13.686392  658811 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:12:13.719014  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.744934  658811 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:12:13.744979  658811 kic_runner.go:114] Args: [docker exec --privileged embed-certs-284604 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:12:13.804379  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:13.833184  658811 machine.go:94] provisionDockerMachine start ...
	I1124 03:12:13.833391  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:13.865266  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:13.865635  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:13.865670  658811 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:12:13.866448  658811 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55158->127.0.0.1:33498: read: connection reset by peer
	I1124 03:12:13.290552  656542 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-993813 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:13.314170  656542 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:13.318716  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:13.333300  656542 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:13.333436  656542 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:13.333523  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.375001  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.375027  656542 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:13.375078  656542 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:13.407152  656542 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:13.407180  656542 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:13.407190  656542 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1124 03:12:13.407342  656542 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-993813 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:13.407444  656542 ssh_runner.go:195] Run: crio config
	I1124 03:12:13.468159  656542 cni.go:84] Creating CNI manager for ""
	I1124 03:12:13.468191  656542 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:13.468220  656542 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:13.468251  656542 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-993813 NodeName:default-k8s-diff-port-993813 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:13.468425  656542 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-993813"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:13.468485  656542 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:13.480922  656542 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:13.480989  656542 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:13.491437  656542 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1124 03:12:13.510538  656542 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:13.531599  656542 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
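The rendered config just copied to /var/tmp/minikube/kubeadm.yaml.new is four YAML documents separated by "---": InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration, exactly as dumped above. A quick way to sanity-check such a multi-document file, sketched with gopkg.in/yaml.v3 (an assumed dependency; the file path is the one from the log):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	// Decode each "---"-separated document and print its kind.
	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		fmt.Println(doc.Kind, doc.APIVersion)
	}
}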
	I1124 03:12:13.550625  656542 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:13.557123  656542 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:13.570105  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:13.687069  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:13.711246  656542 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813 for IP: 192.168.76.2
	I1124 03:12:13.711268  656542 certs.go:195] generating shared ca certs ...
	I1124 03:12:13.711287  656542 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:13.711456  656542 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:13.711513  656542 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:13.711526  656542 certs.go:257] generating profile certs ...
	I1124 03:12:13.711642  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/client.key
	I1124 03:12:13.711706  656542 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key.200cd619
	I1124 03:12:13.711753  656542 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key
	I1124 03:12:13.711996  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:13.712051  656542 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:13.712065  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:13.712101  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:13.712139  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:13.712175  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:13.712240  656542 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.712851  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:13.744604  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:13.773924  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:13.797454  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:13.831783  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1124 03:12:13.870484  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:13.900124  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:13.922822  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/default-k8s-diff-port-993813/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:12:13.948171  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:13.977351  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:14.003032  656542 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:14.029032  656542 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:14.044929  656542 ssh_runner.go:195] Run: openssl version
	I1124 03:12:14.055102  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:14.069569  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074149  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.074206  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:14.129455  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:14.139467  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:14.150460  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155547  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.155598  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:14.213122  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:14.224488  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:14.235043  656542 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239741  656542 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.239796  656542 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:14.296275  656542 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
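The hash-named symlinks being checked above (51391683.0, 3ec20f2e.0, b5213941.0) exist because OpenSSL locates trusted CAs in /etc/ssl/certs by subject-name hash; `openssl x509 -hash -noout` prints that hash, and the ".0" suffix disambiguates collisions. A sketch of the two-step flow mirroring the commands in the log (run for the minikubeCA case; would need root to write under /etc/ssl/certs):

package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	// Compute the OpenSSL subject hash, then expose the cert under
	// /etc/ssl/certs/<hash>.0 where OpenSSL's lookup expects it.
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		log.Fatal(err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(pemPath, link); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
	fmt.Println(link) // /etc/ssl/certs/b5213941.0 for this CA
}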
	I1124 03:12:14.307247  656542 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:14.315784  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:14.374911  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:14.452037  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:14.514532  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:14.577046  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:14.634822  656542 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
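Each of those `-checkend 86400` runs exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether existing control-plane certs can be reused. The same test in Go, as a sketch (the cert path is one of those checked above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin is the Go equivalent of `openssl x509 -noout -checkend N`:
// true when the certificate's NotAfter falls inside the given window.
func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(expiresWithin(data, 24*time.Hour))
}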
	I1124 03:12:14.697600  656542 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-993813 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-993813 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:14.697704  656542 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:14.697759  656542 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:14.736428  656542 cri.go:89] found id: "9d08a55f25f2dea3825f91aa9365cd9a3093b4582eda50f03b22c2ecc00f92c6"
	I1124 03:12:14.736451  656542 cri.go:89] found id: "a7d5f73dd018d0ee98a3ef524ea2c83739dc8c07b34a7298ffb1d288db659329"
	I1124 03:12:14.736458  656542 cri.go:89] found id: "dd990c6cdcef7e1e7305b9fc20b7615dfb761cbe8c5d42a1f61c8b41406cd0a7"
	I1124 03:12:14.736462  656542 cri.go:89] found id: "11357ba44da7473554ad2b8f1e58b742de06b9155b164b6c83a5d2f9beb7830e"
	I1124 03:12:14.736466  656542 cri.go:89] found id: ""
	I1124 03:12:14.736511  656542 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:14.754070  656542 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:14Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:14.754156  656542 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:14.765200  656542 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:14.765224  656542 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:14.765273  656542 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:14.773243  656542 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:14.773947  656542 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-993813" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.774328  656542 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-993813" cluster setting kubeconfig missing "default-k8s-diff-port-993813" context setting]
	I1124 03:12:14.774925  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.776519  656542 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:14.785657  656542 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1124 03:12:14.785687  656542 kubeadm.go:602] duration metric: took 20.455875ms to restartPrimaryControlPlane
	I1124 03:12:14.785704  656542 kubeadm.go:403] duration metric: took 88.114399ms to StartCluster
	I1124 03:12:14.785722  656542 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.785796  656542 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:14.786941  656542 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:14.787180  656542 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:14.787429  656542 config.go:182] Loaded profile config "default-k8s-diff-port-993813": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:14.787487  656542 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:14.787568  656542 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.787584  656542 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.787592  656542 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:14.787615  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.788183  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.788464  656542 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788516  656542 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-993813"
	I1124 03:12:14.788466  656542 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-993813"
	I1124 03:12:14.788738  656542 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.788750  656542 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:14.788782  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.789431  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.789731  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.792034  656542 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:14.793166  656542 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.820828  656542 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:14.821632  656542 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-993813"
	W1124 03:12:14.821655  656542 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:14.821731  656542 host.go:66] Checking if "default-k8s-diff-port-993813" exists ...
	I1124 03:12:14.821909  656542 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:12:14.822084  656542 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:14.822112  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:14.822188  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.822548  656542 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-993813 --format={{.State.Status}}
	I1124 03:12:14.827335  656542 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:13.173638  657716 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:12:13.173665  657716 machine.go:97] duration metric: took 6.241868553s to provisionDockerMachine
	I1124 03:12:13.173679  657716 start.go:293] postStartSetup for "no-preload-603010" (driver="docker")
	I1124 03:12:13.173692  657716 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:13.173754  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:13.173803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.199819  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.311414  657716 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:13.316263  657716 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:13.316292  657716 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:13.316304  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:13.316362  657716 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:13.316451  657716 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:13.316564  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:13.330333  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:13.349678  657716 start.go:296] duration metric: took 175.98281ms for postStartSetup
	I1124 03:12:13.349757  657716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:13.349803  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.372668  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.477580  657716 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
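The two df probes read the second row of output: with -h, $5 is the use percentage of /var; with -BG, $4 is the free space in whole gigabytes, which feeds minikube's low-disk warning. For example:

    df -h /var  | awk 'NR==2{print $5}'   # e.g. "23%"  (fraction of /var in use)
    df -BG /var | awk 'NR==2{print $4}'   # e.g. "15G"  (gigabytes still free)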
	I1124 03:12:13.483572  657716 fix.go:56] duration metric: took 6.891356705s for fixHost
	I1124 03:12:13.483602  657716 start.go:83] releasing machines lock for "no-preload-603010", held for 6.891418388s
	I1124 03:12:13.483679  657716 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-603010
	I1124 03:12:13.509057  657716 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:13.509123  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.509169  657716 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:13.509281  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:13.533830  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.535423  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:13.716640  657716 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:13.727633  657716 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:13.784701  657716 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:13.789877  657716 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:13.789964  657716 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:13.799956  657716 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:12:13.799989  657716 start.go:496] detecting cgroup driver to use...
	I1124 03:12:13.800021  657716 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:13.800080  657716 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:13.821650  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:13.845364  657716 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:13.845437  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:13.876223  657716 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:13.896810  657716 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:14.018144  657716 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:14.133192  657716 docker.go:234] disabling docker service ...
	I1124 03:12:14.133276  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:14.151812  657716 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:14.167561  657716 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:14.282838  657716 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:14.401610  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:14.417930  657716 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:14.437107  657716 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:14.437170  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.449631  657716 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:14.449698  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.462463  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.477641  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.490417  657716 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:14.504273  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.516484  657716 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.526509  657716 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:14.538280  657716 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:14.546998  657716 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:14.555574  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:14.685636  657716 ssh_runner.go:195] Run: sudo systemctl restart crio
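Taken together, the sed edits above pin the pause image, switch CRI-O to the systemd cgroup driver detected on the host, run conmon inside the pod's cgroup, and allow unprivileged binds to low ports, after which crio is restarted. Under those assumptions, the drop-in should end up containing approximately the keys below (a sketch, not verified output):

    # Inspect the expected effect on /etc/crio/crio.conf.d/02-crio.conf
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",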
	I1124 03:12:14.944749  657716 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:14.944917  657716 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:14.950036  657716 start.go:564] Will wait 60s for crictl version
	I1124 03:12:14.950115  657716 ssh_runner.go:195] Run: which crictl
	I1124 03:12:14.954328  657716 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:14.985292  657716 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:14.985374  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.030503  657716 ssh_runner.go:195] Run: crio --version
	I1124 03:12:15.075694  657716 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:15.076822  657716 cli_runner.go:164] Run: docker network inspect no-preload-603010 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:15.102488  657716 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:15.108702  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
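The one-liner above rewrites /etc/hosts through a temp file and a plain cp rather than sed -i, likely because /etc/hosts is bind-mounted into the kic container where an in-place rename would fail; filtering before appending keeps the edit idempotent. Schematically:

    # Drop any stale entry, append the fresh one, copy back over the same inode
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.85.1\thost.minikube.internal'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts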
	I1124 03:12:15.124431  657716 kubeadm.go:884] updating cluster {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:15.124588  657716 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:15.124636  657716 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:15.167486  657716 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:15.167521  657716 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:12:15.167539  657716 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:15.167821  657716 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-603010 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
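The empty ExecStart= preceding the real one in the kubelet unit above is the standard systemd drop-in idiom: a non-oneshot unit may declare only one ExecStart, so the override first clears the packaged default before substituting minikube's kubelet flags. To inspect the rendered unit on the node (illustrative):

    # Show the ExecStart override applied by the 10-kubeadm.conf drop-in
    systemctl cat kubelet | grep -A1 '^ExecStart='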
	I1124 03:12:15.167925  657716 ssh_runner.go:195] Run: crio config
	I1124 03:12:15.235069  657716 cni.go:84] Creating CNI manager for ""
	I1124 03:12:15.235092  657716 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:15.235110  657716 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:15.235137  657716 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-603010 NodeName:no-preload-603010 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:15.235315  657716 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-603010"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:15.235402  657716 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:15.246426  657716 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:15.246486  657716 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:15.255073  657716 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 03:12:15.274174  657716 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:15.291964  657716 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
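The kubeadm.yaml rendered above is a four-document stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); it is written to kubeadm.yaml.new and only promoted if the later kubeadm.yaml diff reports a change. One way to sanity-check such a file before use (hedged; the subcommand exists in recent kubeadm releases):

    # Validate the generated config without touching the running cluster
    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new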
	I1124 03:12:15.310704  657716 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:15.315241  657716 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:15.329049  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:15.444004  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:15.468249  657716 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010 for IP: 192.168.85.2
	I1124 03:12:15.468275  657716 certs.go:195] generating shared ca certs ...
	I1124 03:12:15.468303  657716 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:15.468461  657716 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:15.468527  657716 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:15.468545  657716 certs.go:257] generating profile certs ...
	I1124 03:12:15.468671  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/client.key
	I1124 03:12:15.468756  657716 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key.df111738
	I1124 03:12:15.468820  657716 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key
	I1124 03:12:15.469056  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:15.469155  657716 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:15.469190  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:15.469235  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:15.469307  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:15.469360  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:15.469452  657716 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:15.470423  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:15.492954  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:15.516840  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:15.539720  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:15.572434  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:12:15.602383  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:15.627969  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:15.650700  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/no-preload-603010/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:15.671263  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:15.692710  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:15.715510  657716 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:15.740163  657716 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:12:15.756242  657716 ssh_runner.go:195] Run: openssl version
	I1124 03:12:15.764455  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:15.774930  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779615  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.779675  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:15.837760  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:12:15.848860  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:15.859402  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864242  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.864304  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:15.923088  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:15.933908  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:15.944242  657716 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949198  657716 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:15.949248  657716 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:16.007273  657716 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:16.018117  657716 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:16.023108  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:12:16.086212  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:12:16.144287  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:12:16.203439  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:12:16.267980  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:12:16.329154  657716 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 03:12:16.391972  657716 kubeadm.go:401] StartCluster: {Name:no-preload-603010 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-603010 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:16.392083  657716 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:16.392153  657716 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:16.431895  657716 cri.go:89] found id: "3dc1c0625a30c329549c40b7e0c43d1ab4d818ccc0fb85ea668066af23a327d3"
	I1124 03:12:16.431924  657716 cri.go:89] found id: "4e8e84f339bed2139577a34b2e6715ab1a8d9a3b425a16886245d61781115cf0"
	I1124 03:12:16.431930  657716 cri.go:89] found id: "7294ac6825ca8afcd936803374aa461bcb4d637ea8743600784037c5acb225b2"
	I1124 03:12:16.431934  657716 cri.go:89] found id: "767a9908e75937581bb6f9fd527e760a401658cde3d3cf28bc2b66613eab1027"
	I1124 03:12:16.431938  657716 cri.go:89] found id: ""
	I1124 03:12:16.431989  657716 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:12:16.448469  657716 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:12:16Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:12:16.448636  657716 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:16.460046  657716 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:12:16.460066  657716 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:12:16.460159  657716 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:12:16.470578  657716 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:12:16.472039  657716 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-603010" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.472691  657716 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-603010" cluster setting kubeconfig missing "no-preload-603010" context setting]
	I1124 03:12:16.473827  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.476388  657716 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:12:16.491280  657716 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1124 03:12:16.491307  657716 kubeadm.go:602] duration metric: took 31.234841ms to restartPrimaryControlPlane
	I1124 03:12:16.491317  657716 kubeadm.go:403] duration metric: took 99.357197ms to StartCluster
	I1124 03:12:16.491333  657716 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.491393  657716 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:16.492731  657716 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:16.492990  657716 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:16.493291  657716 config.go:182] Loaded profile config "no-preload-603010": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:16.493352  657716 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:16.493441  657716 addons.go:70] Setting storage-provisioner=true in profile "no-preload-603010"
	I1124 03:12:16.493465  657716 addons.go:239] Setting addon storage-provisioner=true in "no-preload-603010"
	W1124 03:12:16.493473  657716 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:12:16.493503  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494027  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.494266  657716 addons.go:70] Setting dashboard=true in profile "no-preload-603010"
	I1124 03:12:16.494322  657716 addons.go:239] Setting addon dashboard=true in "no-preload-603010"
	I1124 03:12:16.494338  657716 addons.go:70] Setting default-storageclass=true in profile "no-preload-603010"
	I1124 03:12:16.494434  657716 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-603010"
	W1124 03:12:16.494361  657716 addons.go:248] addon dashboard should already be in state true
	I1124 03:12:16.494570  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.494863  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.495005  657716 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:16.495647  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.496468  657716 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:16.527269  657716 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:12:16.528480  657716 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:12:16.528517  657716 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1124 03:12:14.168310  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:16.172923  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:18.176795  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:14.828319  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:14.828372  656542 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:14.828432  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.858092  656542 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:14.858118  656542 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:14.858192  656542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-993813
	I1124 03:12:14.865650  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.866433  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.895242  656542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33488 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/default-k8s-diff-port-993813/id_rsa Username:docker}
	I1124 03:12:14.975501  656542 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:14.992389  656542 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:12:15.008151  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:15.016186  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:15.016211  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:15.031574  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:15.042522  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:15.042540  656542 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:15.074331  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:15.074365  656542 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:15.109090  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:15.109113  656542 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:15.128161  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:15.128184  656542 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:15.147874  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:15.147903  656542 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:15.168191  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:15.168211  656542 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:15.185637  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:15.185661  656542 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:15.202994  656542 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:15.203016  656542 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:15.221608  656542 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
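All ten dashboard manifests are applied in a single kubectl invocation against the node-local kubeconfig; passing repeated -f flags batches them through one client process instead of ten. The same pattern, trimmed to two files (illustrative):

    # Apply several addon manifests with one kubectl process
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply \
        -f /etc/kubernetes/addons/dashboard-ns.yaml \
        -f /etc/kubernetes/addons/dashboard-svc.yaml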
	I1124 03:12:17.996962  656542 node_ready.go:49] node "default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:17.997067  656542 node_ready.go:38] duration metric: took 3.004589581s for node "default-k8s-diff-port-993813" to be "Ready" ...
	I1124 03:12:17.997096  656542 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:17.997184  656542 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:18.834613  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.826385361s)
	I1124 03:12:18.834690  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.803092411s)
	I1124 03:12:18.834853  656542 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.613213665s)
	I1124 03:12:18.834988  656542 api_server.go:72] duration metric: took 4.047778988s to wait for apiserver process to appear ...
	I1124 03:12:18.835771  656542 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:18.835800  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:18.838614  656542 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-993813 addons enable metrics-server
	
	I1124 03:12:18.844882  656542 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 03:12:17.043130  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.043165  658811 ubuntu.go:182] provisioning hostname "embed-certs-284604"
	I1124 03:12:17.043247  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.069679  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.070109  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.070142  658811 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-284604 && echo "embed-certs-284604" | sudo tee /etc/hostname
	I1124 03:12:17.259114  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:12:17.259199  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.284082  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:17.284399  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:17.284433  658811 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-284604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-284604/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-284604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:12:17.452374  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
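The hostname script above follows the Debian convention of mapping the machine's hostname to 127.0.1.1 (distinct from 127.0.0.1): an existing 127.0.1.1 line is rewritten for the new hostname, otherwise one is appended, so the name resolves locally without DNS. A quick post-check (illustrative):

    # Confirm the hostname resolves via /etc/hosts after provisioning
    getent hosts embed-certs-284604    # expect: 127.0.1.1  embed-certs-284604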
	I1124 03:12:17.452411  658811 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:12:17.452438  658811 ubuntu.go:190] setting up certificates
	I1124 03:12:17.452452  658811 provision.go:84] configureAuth start
	I1124 03:12:17.452521  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:17.483434  658811 provision.go:143] copyHostCerts
	I1124 03:12:17.483502  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:12:17.483519  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:12:17.483580  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:12:17.483712  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:12:17.483725  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:12:17.483764  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:12:17.483851  658811 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:12:17.483858  658811 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:12:17.483909  658811 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:12:17.483990  658811 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-284604 san=[127.0.0.1 192.168.94.2 embed-certs-284604 localhost minikube]
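The machine server certificate is issued with SANs for every address a client might dial (127.0.0.1, the container IP 192.168.94.2, the hostname, plus the localhost and minikube aliases), since modern TLS verification matches SANs rather than the subject CN. A hedged openssl equivalent of that issuance (file names assumed):

    # Issue a server cert signed by the minikube CA carrying the same SAN set
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
        -subj "/O=jenkins.embed-certs-284604" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
        -days 365 -out server.pem \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.94.2,DNS:embed-certs-284604,DNS:localhost,DNS:minikube')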
	I1124 03:12:17.911206  658811 provision.go:177] copyRemoteCerts
	I1124 03:12:17.911335  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:12:17.911394  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:17.943914  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.069938  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:12:18.098447  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 03:12:18.124997  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:12:18.162531  658811 provision.go:87] duration metric: took 710.055135ms to configureAuth
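The configureAuth phase above (provision.go:84 through provision.go:87) signs a server certificate with the minikube CA for the SANs logged at provision.go:117, then ships it to /etc/docker on the machine. A rough openssl equivalent of the signing step, assuming hypothetical scratch paths (minikube does this in Go, not via the CLI):

	# Sketch only: sign a server cert for the logged SANs with the minikube CA.
	cat > san.cnf <<'EOF'
	subjectAltName = IP:127.0.0.1, IP:192.168.94.2, DNS:embed-certs-284604, DNS:localhost, DNS:minikube
	EOF
	openssl genrsa -out server-key.pem 2048
	openssl req -new -key server-key.pem -subj "/O=jenkins.embed-certs-284604" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -extfile san.cnf -days 365 -out server.pem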
	I1124 03:12:18.162560  658811 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:12:18.162764  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:18.162877  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.187248  658811 main.go:143] libmachine: Using SSH client type: native
	I1124 03:12:18.187553  658811 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33498 <nil> <nil>}
	I1124 03:12:18.187575  658811 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:12:18.557227  658811 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
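The drop-in written above only takes effect because CRI-O's systemd unit sources /etc/sysconfig/crio.minikube; the log never shows that wiring, but on the kicbase image it is presumably an EnvironmentFile= directive along these lines (paths and ExecStart binary are assumptions):

	# Hypothetical drop-in; the actual kicbase unit wiring is not shown in this log.
	sudo tee /etc/systemd/system/crio.service.d/10-minikube.conf >/dev/null <<'EOF'
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio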
	I1124 03:12:18.557257  658811 machine.go:97] duration metric: took 4.723983027s to provisionDockerMachine
	I1124 03:12:18.557270  658811 client.go:176] duration metric: took 9.315155053s to LocalClient.Create
	I1124 03:12:18.557286  658811 start.go:167] duration metric: took 9.315214435s to libmachine.API.Create "embed-certs-284604"
	I1124 03:12:18.557298  658811 start.go:293] postStartSetup for "embed-certs-284604" (driver="docker")
	I1124 03:12:18.557310  658811 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:12:18.557379  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:12:18.557432  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.587404  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.715877  658811 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:12:18.721275  658811 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:12:18.721309  658811 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:12:18.721322  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:12:18.721381  658811 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:12:18.721473  658811 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:12:18.721597  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:12:18.732645  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:18.763370  658811 start.go:296] duration metric: took 206.056597ms for postStartSetup
	I1124 03:12:18.763732  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.791899  658811 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:12:18.792183  658811 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:12:18.792233  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.820806  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.936530  658811 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:12:18.948570  658811 start.go:128] duration metric: took 9.708372989s to createHost
	I1124 03:12:18.948686  658811 start.go:83] releasing machines lock for "embed-certs-284604", held for 9.708587492s
	I1124 03:12:18.948771  658811 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:12:18.973190  658811 ssh_runner.go:195] Run: cat /version.json
	I1124 03:12:18.973375  658811 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:12:18.973512  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.973582  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:18.998620  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.999698  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:18.845938  656542 addons.go:530] duration metric: took 4.058450553s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 03:12:18.846295  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:18.846717  656542 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:12:19.335969  656542 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1124 03:12:19.342155  656542 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1124 03:12:19.343392  656542 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:19.343421  656542 api_server.go:131] duration metric: took 507.639836ms to wait for apiserver health ...
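The 500s above come from poststarthooks that have not finished yet ([-]rbac/bootstrap-roles and [-]scheduling/bootstrap-system-priority-classes); the wait loop simply re-polls /healthz until it answers 200, which here takes about half a second. A minimal curl sketch of that poll, using the address from the log:

	# Re-poll /healthz until it returns 200; on 500, show only the failing checks.
	url=https://192.168.76.2:8444/healthz
	until curl -ksf "$url" >/dev/null 2>&1; do
	  curl -ks "$url" | grep '^\[-\]'   # e.g. [-]poststarthook/rbac/bootstrap-roles failed
	  sleep 0.5
	done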
	I1124 03:12:19.343433  656542 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:19.347170  656542 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:19.347220  656542 system_pods.go:61] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.347233  656542 system_pods.go:61] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.347244  656542 system_pods.go:61] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.347253  656542 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.347263  656542 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.347271  656542 system_pods.go:61] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.347279  656542 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.347290  656542 system_pods.go:61] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.347300  656542 system_pods.go:74] duration metric: took 3.857291ms to wait for pod list to return data ...
	I1124 03:12:19.347309  656542 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:19.350005  656542 default_sa.go:45] found service account: "default"
	I1124 03:12:19.350027  656542 default_sa.go:55] duration metric: took 2.709767ms for default service account to be created ...
	I1124 03:12:19.350036  656542 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:19.354450  656542 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:19.354480  656542 system_pods.go:89] "coredns-66bc5c9577-w62hm" [4c6f1012-3439-464e-bf6a-4c175f98d54d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:19.354492  656542 system_pods.go:89] "etcd-default-k8s-diff-port-993813" [fa9beeb7-44b3-40d8-a4a7-f0cf76f62b09] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:19.354502  656542 system_pods.go:89] "kindnet-w6sh6" [ff565cd3-e1be-4525-ab1f-465211f42f79] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:19.354512  656542 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-993813" [40aa978c-8e5e-4068-95ec-2e5b93197c27] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:19.354525  656542 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-993813" [83a05b10-60fa-4b44-92fc-fd90b9fbe45b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:19.354534  656542 system_pods.go:89] "kube-proxy-xgjzs" [82b10446-c8e9-4d11-aa15-ed7792a91865] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:19.354542  656542 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-993813" [33518b86-d7d1-42d8-ae53-2db6c2f5ea41] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:19.354550  656542 system_pods.go:89] "storage-provisioner" [50428c8a-8e0e-48d0-ad32-38a93a976ba9] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:19.354560  656542 system_pods.go:126] duration metric: took 4.516416ms to wait for k8s-apps to be running ...
	I1124 03:12:19.354569  656542 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:19.354617  656542 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:19.377699  656542 system_svc.go:56] duration metric: took 23.119925ms WaitForService to wait for kubelet
	I1124 03:12:19.377726  656542 kubeadm.go:587] duration metric: took 4.590516557s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:19.377808  656542 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:19.381785  656542 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:19.381815  656542 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:19.381831  656542 node_conditions.go:105] duration metric: took 4.017737ms to run NodePressure ...
	I1124 03:12:19.381846  656542 start.go:242] waiting for startup goroutines ...
	I1124 03:12:19.381857  656542 start.go:247] waiting for cluster config update ...
	I1124 03:12:19.381883  656542 start.go:256] writing updated cluster config ...
	I1124 03:12:19.382229  656542 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:19.387932  656542 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:19.394333  656542 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
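pod_ready.go then polls each labelled kube-system pod for the Ready condition, up to the 4m0s budget. Roughly the same condition expressed with kubectl (a sketch, not what minikube actually runs):

	# Minikube additionally tolerates the pod disappearing, which `kubectl wait` does not.
	kubectl -n kube-system wait pod/coredns-66bc5c9577-w62hm \
	  --for=condition=Ready --timeout=4m0s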
	I1124 03:12:16.529636  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:12:16.529826  657716 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:12:16.529877  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.529719  657716 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.530024  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:16.530070  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.534729  657716 addons.go:239] Setting addon default-storageclass=true in "no-preload-603010"
	W1124 03:12:16.534754  657716 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:12:16.534783  657716 host.go:66] Checking if "no-preload-603010" exists ...
	I1124 03:12:16.539339  657716 cli_runner.go:164] Run: docker container inspect no-preload-603010 --format={{.State.Status}}
	I1124 03:12:16.565768  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.582397  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.585042  657716 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.585070  657716 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:16.585126  657716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-603010
	I1124 03:12:16.617946  657716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33493 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/no-preload-603010/id_rsa Username:docker}
	I1124 03:12:16.706410  657716 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:16.731745  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:12:16.731773  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:12:16.736337  657716 node_ready.go:35] waiting up to 6m0s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:16.736937  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:16.758823  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:12:16.758847  657716 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:12:16.768684  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:16.788344  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:12:16.788369  657716 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:12:16.806593  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:12:16.806620  657716 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:12:16.847576  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:12:16.847609  657716 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:12:16.867721  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:12:16.867755  657716 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:12:16.886765  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:12:16.886787  657716 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:12:16.907569  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:12:16.907732  657716 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:12:16.929396  657716 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:16.929417  657716 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:12:16.958374  657716 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:12:19.957067  657716 node_ready.go:49] node "no-preload-603010" is "Ready"
	I1124 03:12:19.957111  657716 node_ready.go:38] duration metric: took 3.220732108s for node "no-preload-603010" to be "Ready" ...
	I1124 03:12:19.957131  657716 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:19.957256  657716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:20.880814  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.143842388s)
	I1124 03:12:20.881241  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.112181993s)
	I1124 03:12:21.157660  657716 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.200376454s)
	I1124 03:12:21.157703  657716 api_server.go:72] duration metric: took 4.664681444s to wait for apiserver process to appear ...
	I1124 03:12:21.157713  657716 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:21.157733  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.158403  657716 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.199980339s)
	I1124 03:12:21.160177  657716 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-603010 addons enable metrics-server
	
	I1124 03:12:21.161363  657716 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
	I1124 03:12:19.120481  658811 ssh_runner.go:195] Run: systemctl --version
	I1124 03:12:19.211741  658811 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:12:19.277394  658811 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:12:19.284078  658811 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:12:19.284149  658811 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:12:19.319995  658811 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 03:12:19.320028  658811 start.go:496] detecting cgroup driver to use...
	I1124 03:12:19.320064  658811 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:12:19.320117  658811 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:12:19.345823  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:12:19.367716  658811 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:12:19.367782  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:12:19.389799  658811 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:12:19.412438  658811 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:12:19.524730  658811 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:12:19.637210  658811 docker.go:234] disabling docker service ...
	I1124 03:12:19.637286  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:12:19.659861  658811 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:12:19.677152  658811 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:12:19.823448  658811 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:12:19.960707  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:12:19.981616  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:12:20.012418  658811 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:12:20.012486  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.058077  658811 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:12:20.058214  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.074742  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.118587  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.135044  658811 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:12:20.151861  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.172656  658811 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.194765  658811 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:12:20.232792  658811 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:12:20.242855  658811 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:12:20.253417  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:20.371692  658811 ssh_runner.go:195] Run: sudo systemctl restart crio
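The sed runs between 03:12:20.012486 and 03:12:20.194765 rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, conmon cgroup, and an unprivileged-port sysctl. The file itself is never dumped in this log, but its expected state can be reconstructed from the commands and spot-checked like so:

	# Illustrative check; expected values below are reconstructed from the sed edits.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls = [...])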
	I1124 03:12:21.221343  658811 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:12:21.221440  658811 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:12:21.226905  658811 start.go:564] Will wait 60s for crictl version
	I1124 03:12:21.227016  658811 ssh_runner.go:195] Run: which crictl
	I1124 03:12:21.231693  658811 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:12:21.262514  658811 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:12:21.262603  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.302192  658811 ssh_runner.go:195] Run: crio --version
	I1124 03:12:21.363037  658811 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:12:21.162777  657716 addons.go:530] duration metric: took 4.669427095s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1124 03:12:21.163688  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:21.163718  657716 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:12:20.668896  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:23.167980  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:21.364543  658811 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:12:21.388019  658811 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:12:21.393290  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:21.406629  658811 kubeadm.go:884] updating cluster {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:12:21.406778  658811 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:12:21.406846  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.445258  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.445284  658811 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:12:21.445336  658811 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:12:21.471000  658811 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:12:21.471025  658811 cache_images.go:86] Images are preloaded, skipping loading
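cache_images decides whether anything needs loading by comparing the `sudo crictl images --output json` listing above against the expected image set for v1.34.1. The same check can be eyeballed by hand (jq on the node is an assumption):

	# Manual spot-check: all kube-*, etcd, coredns and pause images for v1.34.1 should appear.
	sudo crictl images --output json | jq -r '.images[].repoTags[]' | sort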
	I1124 03:12:21.471037  658811 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:12:21.471125  658811 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-284604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:12:21.471186  658811 ssh_runner.go:195] Run: crio config
	I1124 03:12:21.516457  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:21.516480  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:21.516502  658811 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:12:21.516532  658811 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-284604 NodeName:embed-certs-284604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:12:21.516680  658811 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-284604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:12:21.516751  658811 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:12:21.524967  658811 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:12:21.525035  658811 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:12:21.533487  658811 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 03:12:21.547228  658811 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:12:21.640415  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
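The 2214-byte kubeadm.yaml.new staged above is exactly the three-document config printed at kubeadm.go:196. If the kubeadm in use supports it (the subcommand exists in v1.26+), the staged file can be sanity-checked in place; illustrative only, not something this test runs:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new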
	I1124 03:12:21.656434  658811 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:12:21.660696  658811 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:12:21.674410  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:21.772584  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:21.798340  658811 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604 for IP: 192.168.94.2
	I1124 03:12:21.798360  658811 certs.go:195] generating shared ca certs ...
	I1124 03:12:21.798381  658811 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.798539  658811 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:12:21.798593  658811 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:12:21.798607  658811 certs.go:257] generating profile certs ...
	I1124 03:12:21.798690  658811 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key
	I1124 03:12:21.798708  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt with IP's: []
	I1124 03:12:21.837756  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt ...
	I1124 03:12:21.837790  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.crt: {Name:mk6d8aec213556beda470e3e5188eed1aec5e183 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838000  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key ...
	I1124 03:12:21.838030  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key: {Name:mk56f44e1d331f82a560e15fe6a3c3ca4602bba2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.838172  658811 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087
	I1124 03:12:21.838189  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 03:12:21.915471  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 ...
	I1124 03:12:21.915494  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087: {Name:mk185605a13bb00cdff0decbde0063003287a88f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915630  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 ...
	I1124 03:12:21.915643  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087: {Name:mk1404f69a73d575873220c9d20779709c9db66b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:21.915715  658811 certs.go:382] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt
	I1124 03:12:21.915784  658811 certs.go:386] copying /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087 -> /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key
	I1124 03:12:21.915837  658811 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key
	I1124 03:12:21.915852  658811 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt with IP's: []
	I1124 03:12:22.064876  658811 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt ...
	I1124 03:12:22.064923  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt: {Name:mk7bbfb718db4eee243d6b6658f5b6db725b34b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:22.065108  658811 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key ...
	I1124 03:12:22.065140  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key: {Name:mk282c31a6bdbd1f185d5fa986bb6679f789f94a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:22.065488  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:12:22.065564  658811 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:12:22.065576  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:12:22.065602  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:12:22.065630  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:12:22.065654  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:12:22.065702  658811 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:12:22.066383  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:12:22.086471  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:12:22.103602  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:12:22.120085  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:12:22.137488  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:12:22.154084  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:12:22.171055  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:12:22.187877  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:12:22.204407  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:12:22.222560  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:12:22.241380  658811 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:12:22.258066  658811 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
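The apiserver cert copied above was generated (crypto.go:68 at 03:12:21.838189) for the service IP, loopback, and the node IP. Its SANs can be confirmed on the machine with openssl (illustrative; requires OpenSSL 1.1.1+ for -ext):

	openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -ext subjectAltName
	# expect IP:10.96.0.1, IP:127.0.0.1, IP:10.0.0.1, IP:192.168.94.2 plus the cluster DNS names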
	I1124 03:12:22.269950  658811 ssh_runner.go:195] Run: openssl version
	I1124 03:12:22.276120  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:12:22.283870  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287375  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.287414  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:12:22.321400  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:12:22.329479  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:12:22.338113  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342815  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.342865  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:12:22.384524  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:12:22.393408  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:12:22.402946  658811 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.406951  658811 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.407009  658811 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:12:22.445501  658811 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
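The `openssl x509 -hash` runs above compute the subject-name hash that OpenSSL's cert-directory lookup expects; the symlink names created (51391683.0, 3ec20f2e.0, b5213941.0) are those hashes with a ".0" suffix, i.e. what c_rehash would produce. For the minikubeCA case from the log:

	# The link name is the CA's subject hash plus ".0":
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # -> b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"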
	I1124 03:12:22.454521  658811 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:12:22.458152  658811 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:12:22.458212  658811 kubeadm.go:401] StartCluster: {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:12:22.458278  658811 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:12:22.458330  658811 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:12:22.487574  658811 cri.go:89] found id: ""
	I1124 03:12:22.487653  658811 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:12:22.495876  658811 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:12:22.505058  658811 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:12:22.505121  658811 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:12:22.515162  658811 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:12:22.515181  658811 kubeadm.go:158] found existing configuration files:
	
	I1124 03:12:22.515229  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:12:22.525864  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:12:22.525956  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:12:22.535632  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:12:22.545975  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:12:22.546068  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:12:22.556144  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.566062  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:12:22.566123  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:12:22.576364  658811 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:12:22.587041  658811 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:12:22.587089  658811 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
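The stale-config cleanup above reduces to: for each kubeconfig under /etc/kubernetes, keep it only if it already points at https://control-plane.minikube.internal:8443, otherwise remove it so kubeadm can regenerate it. A rough shell equivalent of the sequence the runner executes (a sketch, not minikube's actual code path):

  for f in admin kubelet controller-manager scheduler; do
    sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
      || sudo rm -f "/etc/kubernetes/$f.conf"
  done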
	I1124 03:12:22.596656  658811 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:12:22.678370  658811 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 03:12:22.762592  658811 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1124 03:12:21.400229  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:23.400859  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:21.658606  657716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:12:21.664294  657716 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:12:21.665654  657716 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:21.665685  657716 api_server.go:131] duration metric: took 507.965368ms to wait for apiserver health ...
	I1124 03:12:21.665696  657716 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:21.669523  657716 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:21.669569  657716 system_pods.go:61] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.669584  657716 system_pods.go:61] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.669600  657716 system_pods.go:61] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.669613  657716 system_pods.go:61] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.669620  657716 system_pods.go:61] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.669631  657716 system_pods.go:61] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.669640  657716 system_pods.go:61] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.669651  657716 system_pods.go:61] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.669661  657716 system_pods.go:74] duration metric: took 3.958242ms to wait for pod list to return data ...
	I1124 03:12:21.669744  657716 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:21.672641  657716 default_sa.go:45] found service account: "default"
	I1124 03:12:21.672665  657716 default_sa.go:55] duration metric: took 2.912794ms for default service account to be created ...
	I1124 03:12:21.672674  657716 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:21.676337  657716 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:21.676367  657716 system_pods.go:89] "coredns-66bc5c9577-9n5xf" [bafc3685-6d22-404d-aedc-6f9d15506617] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:21.676379  657716 system_pods.go:89] "etcd-no-preload-603010" [6716cea4-9dba-43d5-981b-f315ee84d7ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:12:21.676394  657716 system_pods.go:89] "kindnet-7gvgm" [a8d791b5-f165-42db-8345-cdf52ce933d5] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:12:21.676403  657716 system_pods.go:89] "kube-apiserver-no-preload-603010" [b0a1d8f1-94a6-4875-b0d2-4f639c2a427f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:12:21.676411  657716 system_pods.go:89] "kube-controller-manager-no-preload-603010" [bd3556e6-7d2d-4be7-9a6a-dffbe1cfef67] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:12:21.676422  657716 system_pods.go:89] "kube-proxy-swj6c" [b8b75c64-2a2e-4d0c-b1f7-fe242b173db7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:12:21.676433  657716 system_pods.go:89] "kube-scheduler-no-preload-603010" [2d2ebbc4-7e34-43d8-91dc-e4a487ecee17] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:12:21.676441  657716 system_pods.go:89] "storage-provisioner" [332b95a2-035a-46f2-95ee-1bef73dff6a7] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:21.676450  657716 system_pods.go:126] duration metric: took 3.770261ms to wait for k8s-apps to be running ...
	I1124 03:12:21.676459  657716 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:21.676504  657716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:21.690659  657716 system_svc.go:56] duration metric: took 14.192089ms WaitForService to wait for kubelet
	I1124 03:12:21.690686  657716 kubeadm.go:587] duration metric: took 5.197662584s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:21.690707  657716 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:21.693136  657716 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:21.693164  657716 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:21.693184  657716 node_conditions.go:105] duration metric: took 2.469957ms to run NodePressure ...
	I1124 03:12:21.693203  657716 start.go:242] waiting for startup goroutines ...
	I1124 03:12:21.693215  657716 start.go:247] waiting for cluster config update ...
	I1124 03:12:21.693239  657716 start.go:256] writing updated cluster config ...
	I1124 03:12:21.693532  657716 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:21.697901  657716 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:21.701025  657716 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:12:23.706826  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.707596  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:25.168947  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:27.669069  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:25.402048  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.901054  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:27.707794  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.710379  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:29.675678  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	W1124 03:12:32.166267  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:34.784594  658811 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:12:34.784648  658811 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:12:34.784736  658811 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:12:34.784810  658811 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 03:12:34.784870  658811 kubeadm.go:319] OS: Linux
	I1124 03:12:34.784983  658811 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:12:34.785059  658811 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:12:34.785107  658811 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:12:34.785166  658811 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:12:34.785237  658811 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:12:34.785303  658811 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:12:34.785372  658811 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:12:34.785441  658811 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 03:12:34.785518  658811 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:12:34.785647  658811 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:12:34.785738  658811 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:12:34.785806  658811 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:12:34.786978  658811 out.go:252]   - Generating certificates and keys ...
	I1124 03:12:34.787057  658811 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:12:34.787166  658811 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:12:34.787260  658811 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:12:34.787314  658811 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:12:34.787380  658811 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:12:34.787463  658811 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:12:34.787510  658811 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:12:34.787654  658811 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787713  658811 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:12:34.787835  658811 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-284604 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 03:12:34.787929  658811 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:12:34.787996  658811 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:12:34.788075  658811 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:12:34.788161  658811 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:12:34.788246  658811 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:12:34.788307  658811 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:12:34.788377  658811 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:12:34.788464  658811 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:12:34.788510  658811 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:12:34.788574  658811 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:12:34.788677  658811 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:12:34.789842  658811 out.go:252]   - Booting up control plane ...
	I1124 03:12:34.789955  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:12:34.790029  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:12:34.790102  658811 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:12:34.790202  658811 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:12:34.790286  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:12:34.790369  658811 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:12:34.790438  658811 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:12:34.790470  658811 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:12:34.790573  658811 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:12:34.790662  658811 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:12:34.790715  658811 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001939634s
	I1124 03:12:34.790808  658811 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:12:34.790874  658811 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8443/livez
	I1124 03:12:34.790987  658811 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:12:34.791057  658811 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:12:34.791109  658811 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.83516238s
	I1124 03:12:34.791172  658811 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.120221493s
	I1124 03:12:34.791231  658811 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501624476s
	I1124 03:12:34.791319  658811 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:12:34.791443  658811 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:12:34.791516  658811 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:12:34.791778  658811 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-284604 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:12:34.791865  658811 kubeadm.go:319] [bootstrap-token] Using token: 6opk0j.95uwfc60sd8szhpc
	I1124 03:12:34.793026  658811 out.go:252]   - Configuring RBAC rules ...
	I1124 03:12:34.793125  658811 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:12:34.793213  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:12:34.793344  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:12:34.793455  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:12:34.793557  658811 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:12:34.793642  658811 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:12:34.793774  658811 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:12:34.793810  658811 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:12:34.793851  658811 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:12:34.793857  658811 kubeadm.go:319] 
	I1124 03:12:34.793964  658811 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:12:34.793973  658811 kubeadm.go:319] 
	I1124 03:12:34.794046  658811 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:12:34.794053  658811 kubeadm.go:319] 
	I1124 03:12:34.794074  658811 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:12:34.794151  658811 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:12:34.794229  658811 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:12:34.794239  658811 kubeadm.go:319] 
	I1124 03:12:34.794318  658811 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:12:34.794327  658811 kubeadm.go:319] 
	I1124 03:12:34.794375  658811 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:12:34.794381  658811 kubeadm.go:319] 
	I1124 03:12:34.794424  658811 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:12:34.794490  658811 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:12:34.794554  658811 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:12:34.794560  658811 kubeadm.go:319] 
	I1124 03:12:34.794633  658811 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:12:34.794705  658811 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:12:34.794712  658811 kubeadm.go:319] 
	I1124 03:12:34.794781  658811 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.794955  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 \
	I1124 03:12:34.794990  658811 kubeadm.go:319] 	--control-plane 
	I1124 03:12:34.794996  658811 kubeadm.go:319] 
	I1124 03:12:34.795133  658811 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:12:34.795142  658811 kubeadm.go:319] 
	I1124 03:12:34.795208  658811 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 6opk0j.95uwfc60sd8szhpc \
	I1124 03:12:34.795304  658811 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:aff636c2270d73c29dd28aae7e48500433eef0c59620eb2a1b01db4b7d54c9a2 
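The --discovery-token-ca-cert-hash printed in the join command above can be recomputed on the control plane with the standard kubeadm procedure (not part of this log; shown for reference, assuming the CA lives at the certificateDir /var/lib/minikube/certs reported earlier):

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'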
	I1124 03:12:34.795316  658811 cni.go:84] Creating CNI manager for ""
	I1124 03:12:34.795322  658811 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:12:34.796503  658811 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1124 03:12:29.901574  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.399665  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:32.206353  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.206828  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:34.667383  650744 pod_ready.go:104] pod "coredns-5dd5756b68-5nwx9" is not "Ready", error: <nil>
	I1124 03:12:35.167626  650744 pod_ready.go:94] pod "coredns-5dd5756b68-5nwx9" is "Ready"
	I1124 03:12:35.167652  650744 pod_ready.go:86] duration metric: took 36.006547637s for pod "coredns-5dd5756b68-5nwx9" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.170471  650744 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.174915  650744 pod_ready.go:94] pod "etcd-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.174952  650744 pod_ready.go:86] duration metric: took 4.460425ms for pod "etcd-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.178276  650744 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.181797  650744 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.181815  650744 pod_ready.go:86] duration metric: took 3.521385ms for pod "kube-apiserver-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.184086  650744 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.364640  650744 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-579951" is "Ready"
	I1124 03:12:35.364666  650744 pod_ready.go:86] duration metric: took 180.561055ms for pod "kube-controller-manager-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.566321  650744 pod_ready.go:83] waiting for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:35.965760  650744 pod_ready.go:94] pod "kube-proxy-r82jh" is "Ready"
	I1124 03:12:35.965786  650744 pod_ready.go:86] duration metric: took 399.441601ms for pod "kube-proxy-r82jh" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.166112  650744 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564858  650744 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-579951" is "Ready"
	I1124 03:12:36.564911  650744 pod_ready.go:86] duration metric: took 398.774389ms for pod "kube-scheduler-old-k8s-version-579951" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:36.564927  650744 pod_ready.go:40] duration metric: took 37.40842222s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:36.606666  650744 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 03:12:36.609650  650744 out.go:203] 
	W1124 03:12:36.610839  650744 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:12:36.611943  650744 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:12:36.613009  650744 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-579951" cluster and "default" namespace by default
	I1124 03:12:34.797545  658811 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:12:34.801904  658811 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:12:34.801919  658811 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:12:34.815659  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:12:35.008985  658811 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:12:35.009118  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-284604 minikube.k8s.io/updated_at=2025_11_24T03_12_35_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=embed-certs-284604 minikube.k8s.io/primary=true
	I1124 03:12:35.009137  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.019423  658811 ops.go:34] apiserver oom_adj: -16
	I1124 03:12:35.098937  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:35.600025  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.099882  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:36.599914  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.099714  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:37.599861  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.098989  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:38.599248  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.099379  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.599598  658811 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:12:39.664570  658811 kubeadm.go:1114] duration metric: took 4.655535544s to wait for elevateKubeSystemPrivileges
	I1124 03:12:39.664621  658811 kubeadm.go:403] duration metric: took 17.206413974s to StartCluster
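The repeated `kubectl get sa default` calls at roughly 500ms intervals above are minikube polling for the default ServiceAccount to exist (elevateKubeSystemPrivileges), after creating the minikube-rbac ClusterRoleBinding. A hedged one-liner equivalent of that poll:

  until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do sleep 0.5; done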
	I1124 03:12:39.664642  658811 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.664720  658811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:12:39.666858  658811 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:12:39.667137  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:12:39.667148  658811 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:12:39.667230  658811 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:12:39.667331  658811 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-284604"
	I1124 03:12:39.667356  658811 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-284604"
	I1124 03:12:39.667360  658811 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:12:39.667396  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.667427  658811 addons.go:70] Setting default-storageclass=true in profile "embed-certs-284604"
	I1124 03:12:39.667451  658811 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-284604"
	I1124 03:12:39.667810  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.667990  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.668614  658811 out.go:179] * Verifying Kubernetes components...
	I1124 03:12:39.670239  658811 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:12:39.693324  658811 addons.go:239] Setting addon default-storageclass=true in "embed-certs-284604"
	I1124 03:12:39.693377  658811 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:12:39.693617  658811 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 03:12:34.900232  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:36.901987  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:39.399311  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:39.693843  658811 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:12:39.695301  658811 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.695324  658811 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:12:39.695401  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.723273  658811 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.723298  658811 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:12:39.723378  658811 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:12:39.730678  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.746663  658811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33498 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:12:39.790082  658811 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:12:39.807223  658811 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:12:39.854663  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:12:39.859938  658811 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:12:39.988561  658811 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
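The sed pipeline at 03:12:39.790082 rewrites the coredns ConfigMap in place: it inserts a `log` directive before `errors` and a `hosts` stanza before the `forward` plugin. Reconstructed from the sed expressions (not dumped from the cluster), the affected part of the Corefile reads roughly:

      log
      errors
      ...
      hosts {
         192.168.94.1 host.minikube.internal
         fallthrough
      }
      forward . /etc/resolv.conf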
	I1124 03:12:39.990213  658811 node_ready.go:35] waiting up to 6m0s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:12:40.170444  658811 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1124 03:12:36.707151  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:39.206261  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:41.206507  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	I1124 03:12:40.171595  658811 addons.go:530] duration metric: took 504.363947ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:12:40.492653  658811 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-284604" context rescaled to 1 replicas
	W1124 03:12:41.992667  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:43.993353  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:41.399566  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.899302  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:43.705614  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.706618  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:45.993493  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:47.993708  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	W1124 03:12:46.399440  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.399607  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:48.205812  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:50.206724  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:50.493353  658811 node_ready.go:57] node "embed-certs-284604" has "Ready":"False" status (will retry)
	I1124 03:12:50.993323  658811 node_ready.go:49] node "embed-certs-284604" is "Ready"
	I1124 03:12:50.993350  658811 node_ready.go:38] duration metric: took 11.003110454s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:12:50.993367  658811 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:12:50.993411  658811 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:12:51.005273  658811 api_server.go:72] duration metric: took 11.338089025s to wait for apiserver process to appear ...
	I1124 03:12:51.005299  658811 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:12:51.005319  658811 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:12:51.010460  658811 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:12:51.011346  658811 api_server.go:141] control plane version: v1.34.1
	I1124 03:12:51.011367  658811 api_server.go:131] duration metric: took 6.06186ms to wait for apiserver health ...
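The healthz probe above is a plain HTTPS GET against the apiserver. Outside the test harness the same check can be reproduced with curl (a sketch; -k skips verification because the apiserver cert is not in the host trust store):

  curl -sk https://192.168.94.2:8443/healthz
  # prints "ok" when the apiserver is healthy, matching the 200 above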
	I1124 03:12:51.011376  658811 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:12:51.014056  658811 system_pods.go:59] 8 kube-system pods found
	I1124 03:12:51.014084  658811 system_pods.go:61] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.014092  658811 system_pods.go:61] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.014101  658811 system_pods.go:61] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.014106  658811 system_pods.go:61] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.014113  658811 system_pods.go:61] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.014119  658811 system_pods.go:61] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.014136  658811 system_pods.go:61] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.014147  658811 system_pods.go:61] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.014155  658811 system_pods.go:74] duration metric: took 2.773001ms to wait for pod list to return data ...
	I1124 03:12:51.014164  658811 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:12:51.016349  658811 default_sa.go:45] found service account: "default"
	I1124 03:12:51.016366  658811 default_sa.go:55] duration metric: took 2.196577ms for default service account to be created ...
	I1124 03:12:51.016373  658811 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:12:51.018741  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.018763  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.018768  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.018774  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.018778  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.018783  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.018787  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.018791  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.018798  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.018817  658811 retry.go:31] will retry after 267.963041ms: missing components: kube-dns
	I1124 03:12:51.291183  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.291223  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.291231  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.291239  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.291244  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.291250  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.291255  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.291260  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.291268  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.291295  658811 retry.go:31] will retry after 316.287047ms: missing components: kube-dns
	I1124 03:12:51.610985  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:51.611019  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:51.611026  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:51.611037  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:51.611045  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:51.611055  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:51.611061  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:51.611066  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:51.611074  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:51.611098  658811 retry.go:31] will retry after 440.03042ms: missing components: kube-dns
	I1124 03:12:52.054793  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:52.054821  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:12:52.054826  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:52.054831  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:52.054835  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:52.054839  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:52.054842  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:52.054845  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:52.054850  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:12:52.054863  658811 retry.go:31] will retry after 498.386661ms: missing components: kube-dns
	I1124 03:12:52.557040  658811 system_pods.go:86] 8 kube-system pods found
	I1124 03:12:52.557071  658811 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Running
	I1124 03:12:52.557079  658811 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running
	I1124 03:12:52.557084  658811 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running
	I1124 03:12:52.557089  658811 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running
	I1124 03:12:52.557095  658811 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running
	I1124 03:12:52.557100  658811 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running
	I1124 03:12:52.557104  658811 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running
	I1124 03:12:52.557110  658811 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Running
	I1124 03:12:52.557120  658811 system_pods.go:126] duration metric: took 1.540739928s to wait for k8s-apps to be running ...
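The retry loop above (267ms, 316ms, 440ms, 498ms between pod listings) is minikube's own backoff around listing kube-system pods until kube-dns is Running. A rough kubectl equivalent of waiting out the same condition, using the wait timeout from the "Will wait 6m0s" line earlier (an illustration, not the harness's code):

  sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
    -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m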
	I1124 03:12:52.557134  658811 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:12:52.557188  658811 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:12:52.570482  658811 system_svc.go:56] duration metric: took 13.341226ms WaitForService to wait for kubelet
	I1124 03:12:52.570511  658811 kubeadm.go:587] duration metric: took 12.903331916s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:12:52.570535  658811 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:12:52.573089  658811 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:12:52.573117  658811 node_conditions.go:123] node cpu capacity is 8
	I1124 03:12:52.573148  658811 node_conditions.go:105] duration metric: took 2.605161ms to run NodePressure ...
	I1124 03:12:52.573166  658811 start.go:242] waiting for startup goroutines ...
	I1124 03:12:52.573175  658811 start.go:247] waiting for cluster config update ...
	I1124 03:12:52.573187  658811 start.go:256] writing updated cluster config ...
	I1124 03:12:52.573408  658811 ssh_runner.go:195] Run: rm -f paused
	I1124 03:12:52.576899  658811 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:52.580189  658811 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.584242  658811 pod_ready.go:94] pod "coredns-66bc5c9577-89mzc" is "Ready"
	I1124 03:12:52.584262  658811 pod_ready.go:86] duration metric: took 4.045428ms for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.586066  658811 pod_ready.go:83] waiting for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.590045  658811 pod_ready.go:94] pod "etcd-embed-certs-284604" is "Ready"
	I1124 03:12:52.590064  658811 pod_ready.go:86] duration metric: took 3.981268ms for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.592126  658811 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.595532  658811 pod_ready.go:94] pod "kube-apiserver-embed-certs-284604" is "Ready"
	I1124 03:12:52.595555  658811 pod_ready.go:86] duration metric: took 3.408619ms for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.597386  658811 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.980512  658811 pod_ready.go:94] pod "kube-controller-manager-embed-certs-284604" is "Ready"
	I1124 03:12:52.980538  658811 pod_ready.go:86] duration metric: took 383.129867ms for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.181479  658811 pod_ready.go:83] waiting for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.581552  658811 pod_ready.go:94] pod "kube-proxy-bn8fd" is "Ready"
	I1124 03:12:53.581575  658811 pod_ready.go:86] duration metric: took 400.07394ms for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.781409  658811 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.181669  658811 pod_ready.go:94] pod "kube-scheduler-embed-certs-284604" is "Ready"
	I1124 03:12:54.181696  658811 pod_ready.go:86] duration metric: took 400.263506ms for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.181712  658811 pod_ready.go:40] duration metric: took 1.604781402s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:54.228480  658811 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:54.231260  658811 out.go:179] * Done! kubectl is now configured to use "embed-certs-284604" cluster and "default" namespace by default
	W1124 03:12:50.399926  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	W1124 03:12:52.400576  656542 pod_ready.go:104] pod "coredns-66bc5c9577-w62hm" is not "Ready", error: <nil>
	I1124 03:12:52.900171  656542 pod_ready.go:94] pod "coredns-66bc5c9577-w62hm" is "Ready"
	I1124 03:12:52.900193  656542 pod_ready.go:86] duration metric: took 33.505834176s for pod "coredns-66bc5c9577-w62hm" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.903110  656542 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.907513  656542 pod_ready.go:94] pod "etcd-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:52.907539  656542 pod_ready.go:86] duration metric: took 4.401311ms for pod "etcd-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.909400  656542 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.913156  656542 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:52.913178  656542 pod_ready.go:86] duration metric: took 3.755745ms for pod "kube-apiserver-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:52.914951  656542 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.098380  656542 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:53.098409  656542 pod_ready.go:86] duration metric: took 183.435612ms for pod "kube-controller-manager-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.298588  656542 pod_ready.go:83] waiting for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.698811  656542 pod_ready.go:94] pod "kube-proxy-xgjzs" is "Ready"
	I1124 03:12:53.698835  656542 pod_ready.go:86] duration metric: took 400.225655ms for pod "kube-proxy-xgjzs" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:53.898023  656542 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.299083  656542 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-993813" is "Ready"
	I1124 03:12:54.299107  656542 pod_ready.go:86] duration metric: took 401.0576ms for pod "kube-scheduler-default-k8s-diff-port-993813" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:54.299119  656542 pod_ready.go:40] duration metric: took 34.911155437s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:54.345901  656542 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:54.347541  656542 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-993813" cluster and "default" namespace by default
	W1124 03:12:52.208247  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	W1124 03:12:54.707505  657716 pod_ready.go:104] pod "coredns-66bc5c9577-9n5xf" is not "Ready", error: <nil>
	I1124 03:12:56.206822  657716 pod_ready.go:94] pod "coredns-66bc5c9577-9n5xf" is "Ready"
	I1124 03:12:56.206857  657716 pod_ready.go:86] duration metric: took 34.50580389s for pod "coredns-66bc5c9577-9n5xf" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.209449  657716 pod_ready.go:83] waiting for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.213288  657716 pod_ready.go:94] pod "etcd-no-preload-603010" is "Ready"
	I1124 03:12:56.213310  657716 pod_ready.go:86] duration metric: took 3.839555ms for pod "etcd-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.215450  657716 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.219181  657716 pod_ready.go:94] pod "kube-apiserver-no-preload-603010" is "Ready"
	I1124 03:12:56.219201  657716 pod_ready.go:86] duration metric: took 3.726981ms for pod "kube-apiserver-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.221198  657716 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.404873  657716 pod_ready.go:94] pod "kube-controller-manager-no-preload-603010" is "Ready"
	I1124 03:12:56.404930  657716 pod_ready.go:86] duration metric: took 183.709106ms for pod "kube-controller-manager-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:56.605567  657716 pod_ready.go:83] waiting for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.005571  657716 pod_ready.go:94] pod "kube-proxy-swj6c" is "Ready"
	I1124 03:12:57.005598  657716 pod_ready.go:86] duration metric: took 400.0046ms for pod "kube-proxy-swj6c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.205842  657716 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.605312  657716 pod_ready.go:94] pod "kube-scheduler-no-preload-603010" is "Ready"
	I1124 03:12:57.605336  657716 pod_ready.go:86] duration metric: took 399.465818ms for pod "kube-scheduler-no-preload-603010" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:12:57.605349  657716 pod_ready.go:40] duration metric: took 35.907419342s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:12:57.646839  657716 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:12:57.648681  657716 out.go:179] * Done! kubectl is now configured to use "no-preload-603010" cluster and "default" namespace by default
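(Editor's note: the per-pod readiness gate logged above can be re-checked by hand after a run; a minimal sketch, assuming the "no-preload-603010" context is still present in the local kubeconfig — the labels are the same ones the harness waits on:

  kubectl --context no-preload-603010 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
  kubectl --context no-preload-603010 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=120s)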
	
	
	==> CRI-O <==
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.613984866Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=4456f3e7-d517-4e61-a09f-74e9fa6d7d66 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.615161698Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn/dashboard-metrics-scraper" id=9e6a4635-dc7f-46f8-8883-69b41e6c3a4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.615321114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.622367741Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.623098142Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.65256355Z" level=info msg="Created container 2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn/dashboard-metrics-scraper" id=9e6a4635-dc7f-46f8-8883-69b41e6c3a4a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.653216613Z" level=info msg="Starting container: 2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c" id=03fc8b25-840f-4739-b46e-d17f820c6995 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.655371066Z" level=info msg="Started container" PID=1733 containerID=2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn/dashboard-metrics-scraper id=03fc8b25-840f-4739-b46e-d17f820c6995 name=/runtime.v1.RuntimeService/StartContainer sandboxID=48627d619904c30a6f412d1072a7c5ed911c07848137b64f48e6c3a4c488f8d1
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.778250175Z" level=info msg="Removing container: 33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e" id=24115c28-af42-4be4-a391-de8341d52be2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:12:52 no-preload-603010 crio[579]: time="2025-11-24T03:12:52.790511351Z" level=info msg="Removed container 33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn/dashboard-metrics-scraper" id=24115c28-af42-4be4-a391-de8341d52be2 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.638642158Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.644770588Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.644793532Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.644809743Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.652285018Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.652307627Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.652325258Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.658197571Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.658217717Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.658232766Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.663119394Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.663144221Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.663162319Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.668760814Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 03:13:01 no-preload-603010 crio[579]: time="2025-11-24T03:13:01.668779201Z" level=info msg="Updated default CNI network name to kindnet"
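(Editor's note: the CREATE/WRITE/RENAME sequence above is kindnet atomically replacing its CNI config through a .temp file, with CRI-O re-reading the directory on each inotify event. To inspect the config CRI-O settled on — a sketch, assuming the profile is still running:

  minikube -p no-preload-603010 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist)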
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2a433efd31d37       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   48627d619904c       dashboard-metrics-scraper-6ffb444bf9-2j8cn   kubernetes-dashboard
	2a23c4740fd8a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   93d06eb70c021       storage-provisioner                          kube-system
	e35e4778c80df       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   44 seconds ago      Running             kubernetes-dashboard        0                   a5a6763732c2c       kubernetes-dashboard-855c9754f9-sfsh5        kubernetes-dashboard
	05af58f5afef5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   9ce75a73ca5a2       busybox                                      default
	3d408a41820da       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   b2a5d8667c2cc       coredns-66bc5c9577-9n5xf                     kube-system
	0538658dae8ee       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   48ee827cfee63       kindnet-7gvgm                                kube-system
	ba401cc056a95       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   e46f4ece9ce0f       kube-proxy-swj6c                             kube-system
	3072e8ebabeb4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   93d06eb70c021       storage-provisioner                          kube-system
	3dc1c0625a30c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   6e3b084fcb2a4       kube-controller-manager-no-preload-603010    kube-system
	4e8e84f339bed       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   b64e9bd1664f4       etcd-no-preload-603010                       kube-system
	7294ac6825ca8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   bb95d9f85adce       kube-scheduler-no-preload-603010             kube-system
	767a9908e7593       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   d0553787479da       kube-apiserver-no-preload-603010             kube-system
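(Editor's note: this table is read from the container runtime rather than the API server, so it stays available even when kubectl does not. The same view and the logs of the crash-looping scraper can be pulled directly from CRI-O — a sketch, assuming the node is still up; the container ID is taken from the table above:

  minikube -p no-preload-603010 ssh -- sudo crictl ps -a
  minikube -p no-preload-603010 ssh -- sudo crictl logs 2a433efd31d37)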
	
	
	==> coredns [3d408a41820da3c6cec44b2639564b549a6b0a8af9e865107309ce3c569dd8b2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59195 - 55010 "HINFO IN 1408894213094709921.4509357215920153716. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.509455186s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
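(Editor's note: the "dial tcp 10.96.0.1:443: i/o timeout" errors mean coredns could not reach the kubernetes Service VIP while the control plane was restarting; client-go reflectors retry, so these listers typically recover once the apiserver is back. To pull the same log after the fact — a sketch:

  kubectl --context no-preload-603010 -n kube-system logs -l k8s-app=kube-dns --tail=30)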
	
	
	==> describe nodes <==
	Name:               no-preload-603010
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-603010
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=no-preload-603010
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_11_18_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:11:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-603010
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:13:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:12:50 +0000   Mon, 24 Nov 2025 03:11:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:12:50 +0000   Mon, 24 Nov 2025 03:11:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:12:50 +0000   Mon, 24 Nov 2025 03:11:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:12:50 +0000   Mon, 24 Nov 2025 03:11:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-603010
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                1b59d48b-7e38-42b7-9a74-cd736c856d5f
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-9n5xf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-no-preload-603010                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-7gvgm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-603010              250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-no-preload-603010     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-swj6c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-603010              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-2j8cn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-sfsh5         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 110s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m2s (x8 over 2m2s)  kubelet          Node no-preload-603010 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s (x8 over 2m2s)  kubelet          Node no-preload-603010 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s (x8 over 2m2s)  kubelet          Node no-preload-603010 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node no-preload-603010 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node no-preload-603010 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node no-preload-603010 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                 node-controller  Node no-preload-603010 event: Registered Node no-preload-603010 in Controller
	  Normal  NodeReady                98s                  kubelet          Node no-preload-603010 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node no-preload-603010 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node no-preload-603010 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node no-preload-603010 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node no-preload-603010 event: Registered Node no-preload-603010 in Controller
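(Editor's note: this dump is equivalent to describing the node yourself, which is the quickest way to re-check the repeated kubelet "Starting"/"NodeHasSufficient*" events after another restart:

  kubectl --context no-preload-603010 describe node no-preload-603010)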
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [4e8e84f339bed2139577a34b2e6715ab1a8d9a3b425a16886245d61781115cf0] <==
	{"level":"warn","ts":"2025-11-24T03:12:19.085170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.092672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.103049Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.115366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.131549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.138417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.151346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.160504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.175727Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.182813Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.191984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.201405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.237600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.244715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.262564Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.269639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.287749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.292138Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.310039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:12:19.379668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46178","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T03:12:20.539724Z","caller":"traceutil/trace.go:172","msg":"trace[149637593] transaction","detail":"{read_only:false; response_revision:477; number_of_response:1; }","duration":"148.113999ms","start":"2025-11-24T03:12:20.391595Z","end":"2025-11-24T03:12:20.539709Z","steps":["trace[149637593] 'process raft request'  (duration: 112.11995ms)","trace[149637593] 'compare'  (duration: 35.884371ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T03:12:20.539725Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.935509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/system:persistent-volume-provisioner\" limit:1 ","response":"range_response_count:1 size:1137"}
	{"level":"info","ts":"2025-11-24T03:12:20.539790Z","caller":"traceutil/trace.go:172","msg":"trace[1011386953] range","detail":"{range_begin:/registry/roles/kube-system/system:persistent-volume-provisioner; range_end:; response_count:1; response_revision:476; }","duration":"135.018825ms","start":"2025-11-24T03:12:20.404759Z","end":"2025-11-24T03:12:20.539777Z","steps":["trace[1011386953] 'agreement among raft nodes before linearized reading'  (duration: 98.901475ms)","trace[1011386953] 'range keys from in-memory index tree'  (duration: 35.938152ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:20.873490Z","caller":"traceutil/trace.go:172","msg":"trace[1619096431] transaction","detail":"{read_only:false; response_revision:484; number_of_response:1; }","duration":"158.121397ms","start":"2025-11-24T03:12:20.715348Z","end":"2025-11-24T03:12:20.873470Z","steps":["trace[1619096431] 'process raft request'  (duration: 70.181338ms)","trace[1619096431] 'compare'  (duration: 87.728884ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T03:12:20.873494Z","caller":"traceutil/trace.go:172","msg":"trace[918360513] transaction","detail":"{read_only:false; response_revision:485; number_of_response:1; }","duration":"154.944282ms","start":"2025-11-24T03:12:20.718524Z","end":"2025-11-24T03:12:20.873469Z","steps":["trace[918360513] 'process raft request'  (duration: 154.86791ms)"],"step_count":1}
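(Editor's note: the "apply request took too long" warning and the trace entries above flag requests that exceeded etcd's 100ms expected-duration; on a loaded CI host (load average 4.55 in the kernel section below) this is often disk/CPU contention rather than an etcd fault. To re-read the log — a sketch:

  kubectl --context no-preload-603010 -n kube-system logs etcd-no-preload-603010 --tail=50)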
	
	
	==> kernel <==
	 03:13:14 up  1:55,  0 user,  load average: 4.55, 4.15, 2.73
	Linux no-preload-603010 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0538658dae8eeb1e72082ae5de429b78aaf9874931620b324b5b39bcd20d564e] <==
	I1124 03:12:21.438141       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:12:21.438414       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:12:21.438639       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:12:21.438662       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:12:21.438685       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:12:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:12:21.637882       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:12:21.637960       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:12:21.637972       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:12:21.735030       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 03:12:51.639025       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 03:12:51.639024       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 03:12:51.639027       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 03:12:51.639039       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1124 03:12:52.941229       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:12:52.941274       1 metrics.go:72] Registering metrics
	I1124 03:12:52.941387       1 controller.go:711] "Syncing nftables rules"
	I1124 03:13:01.638326       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:13:01.638405       1 main.go:301] handling current node
	I1124 03:13:11.646014       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:13:11.646061       1 main.go:301] handling current node
	
	
	==> kube-apiserver [767a9908e75937581bb6f9fd527e760a401658cde3d3cf28bc2b66613eab1027] <==
	I1124 03:12:19.995274       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 03:12:19.995648       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 03:12:19.995719       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 03:12:19.995728       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 03:12:19.997552       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 03:12:19.997764       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 03:12:19.997817       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 03:12:19.998803       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 03:12:20.001492       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 03:12:20.001546       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:12:20.011923       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 03:12:20.014456       1 policy_source.go:240] refreshing policies
	I1124 03:12:20.015462       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 03:12:20.067718       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:12:20.377482       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:12:20.612179       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:12:20.668057       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:12:20.714749       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:12:20.881605       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:12:21.012244       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:12:21.107002       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.108.105.77"}
	I1124 03:12:21.147442       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.165.131"}
	I1124 03:12:23.729659       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:12:23.779823       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:12:23.880590       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [3dc1c0625a30c329549c40b7e0c43d1ab4d818ccc0fb85ea668066af23a327d3] <==
	I1124 03:12:23.320942       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:12:23.325395       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:12:23.325411       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:12:23.325422       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:12:23.325550       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 03:12:23.325818       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:12:23.325834       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:12:23.325952       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:12:23.325963       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:12:23.326461       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:12:23.326654       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:12:23.327067       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:12:23.327194       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:12:23.327241       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:12:23.330145       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:12:23.330987       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:12:23.332178       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:12:23.334506       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:12:23.334508       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:12:23.336698       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 03:12:23.338947       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 03:12:23.338961       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:12:23.339037       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:12:23.340210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:12:23.359511       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [ba401cc056a953c5699c15cbf074185bee5218833058db0fed286d0270ae02ba] <==
	I1124 03:12:21.227389       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:12:21.301440       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:12:21.402435       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:12:21.402559       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 03:12:21.402732       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:12:21.426279       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:12:21.426404       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:12:21.431844       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:12:21.432292       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:12:21.432335       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:12:21.433861       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:12:21.433909       1 config.go:200] "Starting service config controller"
	I1124 03:12:21.433919       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:12:21.433923       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:12:21.433945       1 config.go:309] "Starting node config controller"
	I1124 03:12:21.433952       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:12:21.433953       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:12:21.433959       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:12:21.534914       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:12:21.534934       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:12:21.534924       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:12:21.534981       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [7294ac6825ca8afcd936803374aa461bcb4d637ea8743600784037c5acb225b2] <==
	I1124 03:12:17.080470       1 serving.go:386] Generated self-signed cert in-memory
	W1124 03:12:19.973585       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:12:19.973626       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found]
	W1124 03:12:19.973642       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:12:19.973651       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:12:20.014247       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:12:20.014284       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:12:20.018512       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:12:20.019133       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:12:20.019212       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:12:20.019271       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:12:20.119768       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:12:23 no-preload-603010 kubelet[718]: I1124 03:12:23.986006     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/759895bc-23f3-4a43-b1a5-2a34cb7593bc-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-2j8cn\" (UID: \"759895bc-23f3-4a43-b1a5-2a34cb7593bc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn"
	Nov 24 03:12:23 no-preload-603010 kubelet[718]: I1124 03:12:23.986058     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4cqh\" (UniqueName: \"kubernetes.io/projected/4271eb57-8093-4453-8aad-0faa0f0d1c1e-kube-api-access-c4cqh\") pod \"kubernetes-dashboard-855c9754f9-sfsh5\" (UID: \"4271eb57-8093-4453-8aad-0faa0f0d1c1e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sfsh5"
	Nov 24 03:12:23 no-preload-603010 kubelet[718]: I1124 03:12:23.986182     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4271eb57-8093-4453-8aad-0faa0f0d1c1e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-sfsh5\" (UID: \"4271eb57-8093-4453-8aad-0faa0f0d1c1e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sfsh5"
	Nov 24 03:12:23 no-preload-603010 kubelet[718]: I1124 03:12:23.986249     718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vxch\" (UniqueName: \"kubernetes.io/projected/759895bc-23f3-4a43-b1a5-2a34cb7593bc-kube-api-access-4vxch\") pod \"dashboard-metrics-scraper-6ffb444bf9-2j8cn\" (UID: \"759895bc-23f3-4a43-b1a5-2a34cb7593bc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn"
	Nov 24 03:12:25 no-preload-603010 kubelet[718]: I1124 03:12:25.738115     718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 03:12:31 no-preload-603010 kubelet[718]: I1124 03:12:31.489969     718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-sfsh5" podStartSLOduration=3.366297844 podStartE2EDuration="8.48987631s" podCreationTimestamp="2025-11-24 03:12:23 +0000 UTC" firstStartedPulling="2025-11-24 03:12:24.200304259 +0000 UTC m=+8.729436688" lastFinishedPulling="2025-11-24 03:12:29.323882724 +0000 UTC m=+13.853015154" observedRunningTime="2025-11-24 03:12:29.736222431 +0000 UTC m=+14.265354896" watchObservedRunningTime="2025-11-24 03:12:31.48987631 +0000 UTC m=+16.019008751"
	Nov 24 03:12:32 no-preload-603010 kubelet[718]: I1124 03:12:32.724089     718 scope.go:117] "RemoveContainer" containerID="d37d22bd32705cdf7290134d2fef83db23d75f3fdd279150bb28ca47b472c963"
	Nov 24 03:12:33 no-preload-603010 kubelet[718]: I1124 03:12:33.727773     718 scope.go:117] "RemoveContainer" containerID="d37d22bd32705cdf7290134d2fef83db23d75f3fdd279150bb28ca47b472c963"
	Nov 24 03:12:33 no-preload-603010 kubelet[718]: I1124 03:12:33.727976     718 scope.go:117] "RemoveContainer" containerID="33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e"
	Nov 24 03:12:33 no-preload-603010 kubelet[718]: E1124 03:12:33.728179     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2j8cn_kubernetes-dashboard(759895bc-23f3-4a43-b1a5-2a34cb7593bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn" podUID="759895bc-23f3-4a43-b1a5-2a34cb7593bc"
	Nov 24 03:12:34 no-preload-603010 kubelet[718]: I1124 03:12:34.731327     718 scope.go:117] "RemoveContainer" containerID="33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e"
	Nov 24 03:12:34 no-preload-603010 kubelet[718]: E1124 03:12:34.731535     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2j8cn_kubernetes-dashboard(759895bc-23f3-4a43-b1a5-2a34cb7593bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn" podUID="759895bc-23f3-4a43-b1a5-2a34cb7593bc"
	Nov 24 03:12:40 no-preload-603010 kubelet[718]: I1124 03:12:40.793558     718 scope.go:117] "RemoveContainer" containerID="33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e"
	Nov 24 03:12:40 no-preload-603010 kubelet[718]: E1124 03:12:40.793705     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2j8cn_kubernetes-dashboard(759895bc-23f3-4a43-b1a5-2a34cb7593bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn" podUID="759895bc-23f3-4a43-b1a5-2a34cb7593bc"
	Nov 24 03:12:51 no-preload-603010 kubelet[718]: I1124 03:12:51.771946     718 scope.go:117] "RemoveContainer" containerID="3072e8ebabeb4373de4efeab47db549507d3ee4e0654e8677138ab8f8c18ece3"
	Nov 24 03:12:52 no-preload-603010 kubelet[718]: I1124 03:12:52.612094     718 scope.go:117] "RemoveContainer" containerID="33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e"
	Nov 24 03:12:52 no-preload-603010 kubelet[718]: I1124 03:12:52.776915     718 scope.go:117] "RemoveContainer" containerID="33de6c303c7bcf87d793b286609d8ff37b137005d9e66912668b7f5ab22d4a1e"
	Nov 24 03:12:52 no-preload-603010 kubelet[718]: I1124 03:12:52.777246     718 scope.go:117] "RemoveContainer" containerID="2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c"
	Nov 24 03:12:52 no-preload-603010 kubelet[718]: E1124 03:12:52.777502     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2j8cn_kubernetes-dashboard(759895bc-23f3-4a43-b1a5-2a34cb7593bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn" podUID="759895bc-23f3-4a43-b1a5-2a34cb7593bc"
	Nov 24 03:13:00 no-preload-603010 kubelet[718]: I1124 03:13:00.792998     718 scope.go:117] "RemoveContainer" containerID="2a433efd31d3741cd08e9d42cea0074c52a30b507785c1ebf2784f0471b4862c"
	Nov 24 03:13:00 no-preload-603010 kubelet[718]: E1124 03:13:00.793151     718 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-2j8cn_kubernetes-dashboard(759895bc-23f3-4a43-b1a5-2a34cb7593bc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-2j8cn" podUID="759895bc-23f3-4a43-b1a5-2a34cb7593bc"
	Nov 24 03:13:09 no-preload-603010 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 03:13:09 no-preload-603010 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 03:13:09 no-preload-603010 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 03:13:09 no-preload-603010 systemd[1]: kubelet.service: Consumed 1.568s CPU time.
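(Editor's note: the final systemd lines line up with the pause under test — minikube pause stops the kubelet unit — so a cleanly stopped kubelet here is expected rather than a crash. To confirm on the node — a sketch, assuming the profile container is still running:

  minikube -p no-preload-603010 ssh -- sudo systemctl status kubelet)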
	
	
	==> kubernetes-dashboard [e35e4778c80df433ced61266b491a3bff7391fc67271709f5ef3f7509c962a42] <==
	2025/11/24 03:12:29 Starting overwatch
	2025/11/24 03:12:29 Using namespace: kubernetes-dashboard
	2025/11/24 03:12:29 Using in-cluster config to connect to apiserver
	2025/11/24 03:12:29 Using secret token for csrf signing
	2025/11/24 03:12:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 03:12:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 03:12:29 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 03:12:29 Generating JWE encryption key
	2025/11/24 03:12:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 03:12:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 03:12:30 Initializing JWE encryption key from synchronized object
	2025/11/24 03:12:30 Creating in-cluster Sidecar client
	2025/11/24 03:12:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 03:12:30 Serving insecurely on HTTP port: 9090
	2025/11/24 03:13:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [2a23c4740fd8a0b86f68bdde06ff7fc26aef5bd492c29ae3555a8b8bd1103d39] <==
	I1124 03:12:51.820815       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:12:51.828620       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:12:51.828667       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:12:51.831063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:55.285686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:12:59.545387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:03.143498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:06.197193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:09.219413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:09.223647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:13:09.223811       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:13:09.223934       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-603010_d585a338-9708-4771-a89c-fcd3d1b04230!
	I1124 03:13:09.223927       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a5b777d2-712b-44e4-a3bf-a14213c57432", APIVersion:"v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-603010_d585a338-9708-4771-a89c-fcd3d1b04230 became leader
	W1124 03:13:09.225846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:09.228570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:13:09.324240       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-603010_d585a338-9708-4771-a89c-fcd3d1b04230!
	W1124 03:13:11.231500       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:11.236133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:13.239179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:13:13.243548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [3072e8ebabeb4373de4efeab47db549507d3ee4e0654e8677138ab8f8c18ece3] <==
	I1124 03:12:21.119565       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 03:12:51.131024       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
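The fatal line in the second storage-provisioner instance above is a reachability probe against the in-cluster apiserver VIP that times out while the control plane is still restarting. Below is a minimal Go sketch of that kind of probe; the URL and the 32s budget are taken from the logged request, but the code is illustrative only and is not the provisioner's actual implementation (which goes through client-go):

package main

// A minimal sketch of the probe that fails in the storage-provisioner log
// above: an HTTPS GET against the in-cluster apiserver VIP with a bounded
// timeout. The URL and timeout mirror the logged request.

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // matches the ?timeout=32s in the logged URL
		Transport: &http.Transport{
			// Sketch only: a real in-cluster client verifies the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.96.0.1:443/version")
	if err != nil {
		// While the apiserver is still coming back this is the i/o timeout above.
		fmt.Println("error getting server version:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver reachable:", resp.Status)
}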
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-603010 -n no-preload-603010
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-603010 -n no-preload-603010: exit status 2 (334.951638ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-603010 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.87s)
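Note how the post-mortem at helpers_test.go:262 above treats exit status 2 from `minikube status` as possibly benign: the command still prints a state string ("Running") while signalling a degraded component through the exit code. A hypothetical Go helper, not minikube's code, that recovers both pieces of information looks like this:

package main

// Hypothetical sketch: run the same status probe the post-mortem uses and
// recover both the printed state and the exit code, since `minikube status`
// reports degraded components via non-zero exits while still printing a state.

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64",
		"status", "--format={{.APIServer}}", "-p", "no-preload-603010")
	out, err := cmd.Output() // stdout is returned even when the exit code is non-zero
	code := 0
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		code = ee.ExitCode()
	} else if err != nil {
		fmt.Println("could not run status:", err)
		return
	}
	fmt.Printf("state=%q exit=%d (exit 2 may be ok per helpers_test.go)\n", out, code)
}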

TestStartStop/group/embed-certs/serial/Pause (5.12s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-284604 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-284604 --alsologtostderr -v=1: exit status 80 (1.646791347s)

-- stdout --
	* Pausing node embed-certs-284604 ... 
	
	

-- /stdout --
** stderr ** 
	I1124 03:14:20.360590  676517 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:14:20.361074  676517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:14:20.361086  676517 out.go:374] Setting ErrFile to fd 2...
	I1124 03:14:20.361090  676517 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:14:20.361291  676517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:14:20.361522  676517 out.go:368] Setting JSON to false
	I1124 03:14:20.361546  676517 mustload.go:66] Loading cluster: embed-certs-284604
	I1124 03:14:20.361958  676517 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:14:20.362322  676517 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:14:20.380372  676517 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:14:20.380640  676517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:14:20.437935  676517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-24 03:14:20.427123157 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:14:20.438567  676517 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21975/minikube-v1.37.0-1763935228-21975-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763935228-21975/minikube-v1.37.0-1763935228-21975-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763935228-21975-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-284604 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 03:14:20.440352  676517 out.go:179] * Pausing node embed-certs-284604 ... 
	I1124 03:14:20.441403  676517 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:14:20.441654  676517 ssh_runner.go:195] Run: systemctl --version
	I1124 03:14:20.441699  676517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:14:20.458365  676517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:14:20.553867  676517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:14:20.565178  676517 pause.go:52] kubelet running: true
	I1124 03:14:20.565242  676517 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:14:20.726221  676517 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:14:20.726321  676517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:14:20.788351  676517 cri.go:89] found id: "7dfe56080d905fccce6df6410252e5de80a8471a616d9694d5005304fc5a8607"
	I1124 03:14:20.788371  676517 cri.go:89] found id: "7c9de32e2a2a1a4bf1dcb74cc732ea9bf8940a38a5255b6219921850e280f953"
	I1124 03:14:20.788376  676517 cri.go:89] found id: "d1fd18ad940c962cf45cd1bcc24444e576f59c99eaf790532d0fef509627de0c"
	I1124 03:14:20.788379  676517 cri.go:89] found id: "7f8f00c980f03b3a444b2377b90496745dbb07c9bd8f9baeb585c8435ae1c9dc"
	I1124 03:14:20.788382  676517 cri.go:89] found id: "e2e368c8131a6bdffbca9bf069eec5b0d46432a4f32e063227f4393352e1c12b"
	I1124 03:14:20.788386  676517 cri.go:89] found id: "bee45aa12c3da24d490c817cac60d2855a72aa70d2a66c610bbc0b141b008dbf"
	I1124 03:14:20.788389  676517 cri.go:89] found id: "bbf02917610133c48abd17535a3d2ae4b7bf5f001204872f0f6c240d1a35d582"
	I1124 03:14:20.788392  676517 cri.go:89] found id: "d2ca966aa30cf5ae6493816c664588714b29eded4d0b36ff92e650b04101b9da"
	I1124 03:14:20.788394  676517 cri.go:89] found id: "76f77f06071348df9904042948e1b1b6506e913800d6148de693cf689f54ff8b"
	I1124 03:14:20.788408  676517 cri.go:89] found id: "8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0"
	I1124 03:14:20.788411  676517 cri.go:89] found id: "c01f29a7b7ec8871b22514ceac1950d3ab216fa11d1e3c795917584a750a2e70"
	I1124 03:14:20.788414  676517 cri.go:89] found id: ""
	I1124 03:14:20.788450  676517 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:14:20.799618  676517 retry.go:31] will retry after 318.954966ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:14:20Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:14:21.119201  676517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:14:21.131362  676517 pause.go:52] kubelet running: false
	I1124 03:14:21.131417  676517 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:14:21.267424  676517 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:14:21.267511  676517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:14:21.328875  676517 cri.go:89] found id: "7dfe56080d905fccce6df6410252e5de80a8471a616d9694d5005304fc5a8607"
	I1124 03:14:21.328910  676517 cri.go:89] found id: "7c9de32e2a2a1a4bf1dcb74cc732ea9bf8940a38a5255b6219921850e280f953"
	I1124 03:14:21.328916  676517 cri.go:89] found id: "d1fd18ad940c962cf45cd1bcc24444e576f59c99eaf790532d0fef509627de0c"
	I1124 03:14:21.328920  676517 cri.go:89] found id: "7f8f00c980f03b3a444b2377b90496745dbb07c9bd8f9baeb585c8435ae1c9dc"
	I1124 03:14:21.328922  676517 cri.go:89] found id: "e2e368c8131a6bdffbca9bf069eec5b0d46432a4f32e063227f4393352e1c12b"
	I1124 03:14:21.328926  676517 cri.go:89] found id: "bee45aa12c3da24d490c817cac60d2855a72aa70d2a66c610bbc0b141b008dbf"
	I1124 03:14:21.328929  676517 cri.go:89] found id: "bbf02917610133c48abd17535a3d2ae4b7bf5f001204872f0f6c240d1a35d582"
	I1124 03:14:21.328934  676517 cri.go:89] found id: "d2ca966aa30cf5ae6493816c664588714b29eded4d0b36ff92e650b04101b9da"
	I1124 03:14:21.328938  676517 cri.go:89] found id: "76f77f06071348df9904042948e1b1b6506e913800d6148de693cf689f54ff8b"
	I1124 03:14:21.328947  676517 cri.go:89] found id: "8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0"
	I1124 03:14:21.328962  676517 cri.go:89] found id: "c01f29a7b7ec8871b22514ceac1950d3ab216fa11d1e3c795917584a750a2e70"
	I1124 03:14:21.328967  676517 cri.go:89] found id: ""
	I1124 03:14:21.329011  676517 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:14:21.340100  676517 retry.go:31] will retry after 364.238705ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:14:21Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:14:21.704626  676517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:14:21.728233  676517 pause.go:52] kubelet running: false
	I1124 03:14:21.728300  676517 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 03:14:21.863505  676517 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 03:14:21.863574  676517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 03:14:21.924576  676517 cri.go:89] found id: "7dfe56080d905fccce6df6410252e5de80a8471a616d9694d5005304fc5a8607"
	I1124 03:14:21.924602  676517 cri.go:89] found id: "7c9de32e2a2a1a4bf1dcb74cc732ea9bf8940a38a5255b6219921850e280f953"
	I1124 03:14:21.924607  676517 cri.go:89] found id: "d1fd18ad940c962cf45cd1bcc24444e576f59c99eaf790532d0fef509627de0c"
	I1124 03:14:21.924612  676517 cri.go:89] found id: "7f8f00c980f03b3a444b2377b90496745dbb07c9bd8f9baeb585c8435ae1c9dc"
	I1124 03:14:21.924617  676517 cri.go:89] found id: "e2e368c8131a6bdffbca9bf069eec5b0d46432a4f32e063227f4393352e1c12b"
	I1124 03:14:21.924633  676517 cri.go:89] found id: "bee45aa12c3da24d490c817cac60d2855a72aa70d2a66c610bbc0b141b008dbf"
	I1124 03:14:21.924636  676517 cri.go:89] found id: "bbf02917610133c48abd17535a3d2ae4b7bf5f001204872f0f6c240d1a35d582"
	I1124 03:14:21.924638  676517 cri.go:89] found id: "d2ca966aa30cf5ae6493816c664588714b29eded4d0b36ff92e650b04101b9da"
	I1124 03:14:21.924641  676517 cri.go:89] found id: "76f77f06071348df9904042948e1b1b6506e913800d6148de693cf689f54ff8b"
	I1124 03:14:21.924648  676517 cri.go:89] found id: "8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0"
	I1124 03:14:21.924654  676517 cri.go:89] found id: "c01f29a7b7ec8871b22514ceac1950d3ab216fa11d1e3c795917584a750a2e70"
	I1124 03:14:21.924657  676517 cri.go:89] found id: ""
	I1124 03:14:21.924693  676517 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 03:14:21.937832  676517 out.go:203] 
	W1124 03:14:21.939150  676517 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:14:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:14:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 03:14:21.939165  676517 out.go:285] * 
	* 
	W1124 03:14:21.943896  676517 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 03:14:21.945148  676517 out.go:203] 

** /stderr **
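The step the pause aborts on in the stderr above is `sudo runc list -f json` exiting 1 with "open /run/runc: no such file or directory"; retrying the same command against runc's default state root, as the log shows minikube doing, cannot succeed if that root is absent. A sketch of the listing with an explicit state root follows; the alternative root it tries, /run/crio/runc, is an assumption about where CRI-O keeps runc state and is not confirmed by this report:

package main

// Sketch of the failing listing step with a configurable runc state root.
// The fallback root /run/crio/runc is an assumption, not confirmed here.

import (
	"bytes"
	"fmt"
	"os/exec"
)

func listRunc(root string) error {
	args := []string{"list", "-f", "json"}
	if root != "" {
		args = append([]string{"--root", root}, args...) // global flag precedes the subcommand
	}
	cmd := exec.Command("sudo", append([]string{"runc"}, args...)...)
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("runc list (root=%q): %v: %s", root, err, stderr.String())
	}
	return nil
}

func main() {
	// Default root first (what the failing pause does), then the assumed CRI-O root.
	for _, root := range []string{"", "/run/crio/runc"} {
		if err := listRunc(root); err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Println("listed containers with state root:", root)
		return
	}
}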
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-284604 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-284604
helpers_test.go:243: (dbg) docker inspect embed-certs-284604:

-- stdout --
	[
	    {
	        "Id": "65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa",
	        "Created": "2025-11-24T03:12:13.144496823Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 673971,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:13:21.236174017Z",
	            "FinishedAt": "2025-11-24T03:13:20.43779445Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa/hosts",
	        "LogPath": "/var/lib/docker/containers/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa-json.log",
	        "Name": "/embed-certs-284604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-284604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-284604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa",
	                "LowerDir": "/var/lib/docker/overlay2/af574c392758f969970a6326011c4ed57be640b8c05a9d111cafa150f3cb303b-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af574c392758f969970a6326011c4ed57be640b8c05a9d111cafa150f3cb303b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af574c392758f969970a6326011c4ed57be640b8c05a9d111cafa150f3cb303b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af574c392758f969970a6326011c4ed57be640b8c05a9d111cafa150f3cb303b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-284604",
	                "Source": "/var/lib/docker/volumes/embed-certs-284604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-284604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-284604",
	                "name.minikube.sigs.k8s.io": "embed-certs-284604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6d853c450794ba7800c4bfd667592ac27604a35945845afdcb76eae0f6b44d03",
	            "SandboxKey": "/var/run/docker/netns/6d853c450794",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-284604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1d9fd759284ca1283df730e0f7d581869748db9e3cd1619451e948defda88535",
	                    "EndpointID": "a4a40d4b817ffe8f5454c7ea15cda79603736ee2cbd74c8a04e9d5f8c45249af",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "0a:53:04:2a:d6:89",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-284604",
	                        "65dda7ef92bd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
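The pause run earlier resolved the SSH host port with `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"`, and the inspect output above shows where that value lives: Ports maps "22/tcp" to a binding list whose first entry has HostPort "33503". The sketch below evaluates the inner template against a Ports-shaped value copied from the output above, rather than the full inspect document, to show how the nested index calls pick the port out:

package main

// Evaluate the nested-index Go template from the inspect command against a
// value shaped like NetworkSettings.Ports in the output above.

import (
	"os"
	"text/template"
)

type portBinding struct {
	HostIp   string
	HostPort string
}

func main() {
	ports := map[string][]portBinding{
		"22/tcp": {{HostIp: "127.0.0.1", HostPort: "33503"}},
	}
	// index the map by key, then the binding slice by position 0
	tmpl := template.Must(template.New("ssh").Parse(
		`{{(index (index . "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, ports); err != nil { // prints 33503
		panic(err)
	}
}

The single quotes wrapped around the template in the logged command only protect the braces from the remote shell; the template itself is what does the extraction.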
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284604 -n embed-certs-284604
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284604 -n embed-certs-284604: exit status 2 (316.840742ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-284604 logs -n 25
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p no-preload-603010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                     │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                     │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p disable-driver-mounts-242597                                                                                                                                          │ disable-driver-mounts-242597 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ image   │ old-k8s-version-579951 image list --format=json                                                                                                                          │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p old-k8s-version-579951 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable metrics-server -p embed-certs-284604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ stop    │ -p embed-certs-284604 --alsologtostderr -v=3                                                                                                                             │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ image   │ default-k8s-diff-port-993813 image list --format=json                                                                                                                    │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ pause   │ -p default-k8s-diff-port-993813 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ image   │ no-preload-603010 image list --format=json                                                                                                                               │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ pause   │ -p no-preload-603010 --alsologtostderr -v=1                                                                                                                              │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993813                                                                                                                                          │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ delete  │ -p default-k8s-diff-port-993813                                                                                                                                          │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ delete  │ -p no-preload-603010                                                                                                                                                     │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ delete  │ -p no-preload-603010                                                                                                                                                     │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ addons  │ enable dashboard -p embed-certs-284604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:14 UTC │
	│ image   │ embed-certs-284604 image list --format=json                                                                                                                              │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ pause   │ -p embed-certs-284604 --alsologtostderr -v=1                                                                                                                             │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:13:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:13:21.023697  673767 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:13:21.023794  673767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:21.023805  673767 out.go:374] Setting ErrFile to fd 2...
	I1124 03:13:21.023810  673767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:21.024024  673767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:13:21.024485  673767 out.go:368] Setting JSON to false
	I1124 03:13:21.025474  673767 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6948,"bootTime":1763947053,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:13:21.025524  673767 start.go:143] virtualization: kvm guest
	I1124 03:13:21.027092  673767 out.go:179] * [embed-certs-284604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:13:21.028207  673767 notify.go:221] Checking for updates...
	I1124 03:13:21.028214  673767 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:13:21.029309  673767 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:13:21.030380  673767 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:13:21.031472  673767 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:13:21.032575  673767 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:13:21.033601  673767 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:13:21.034917  673767 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:13:21.035435  673767 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:13:21.058427  673767 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:13:21.058540  673767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:21.112477  673767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-24 03:13:21.102640806 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:21.112584  673767 docker.go:319] overlay module found
	I1124 03:13:21.113967  673767 out.go:179] * Using the docker driver based on existing profile
	I1124 03:13:21.115143  673767 start.go:309] selected driver: docker
	I1124 03:13:21.115157  673767 start.go:927] validating driver "docker" against &{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerN
ame:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:21.115246  673767 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:13:21.115798  673767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:21.171790  673767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-24 03:13:21.163161885 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:21.172086  673767 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:13:21.172125  673767 cni.go:84] Creating CNI manager for ""
	I1124 03:13:21.172199  673767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:13:21.172243  673767 start.go:353] cluster config:
	{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:21.173815  673767 out.go:179] * Starting "embed-certs-284604" primary control-plane node in "embed-certs-284604" cluster
	I1124 03:13:21.174821  673767 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:13:21.175933  673767 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:13:21.176985  673767 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:13:21.177015  673767 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:13:21.177026  673767 cache.go:65] Caching tarball of preloaded images
	I1124 03:13:21.177061  673767 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:13:21.177120  673767 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:13:21.177135  673767 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:13:21.177243  673767 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:13:21.196030  673767 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:13:21.196045  673767 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:13:21.196059  673767 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:13:21.196093  673767 start.go:360] acquireMachinesLock for embed-certs-284604: {Name:mkd39be5908e1d289ed5af40b6c2b1c510beffd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:21.196158  673767 start.go:364] duration metric: took 36.894µs to acquireMachinesLock for "embed-certs-284604"
	I1124 03:13:21.196174  673767 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:13:21.196181  673767 fix.go:54] fixHost starting: 
	I1124 03:13:21.196382  673767 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:13:21.212338  673767 fix.go:112] recreateIfNeeded on embed-certs-284604: state=Stopped err=<nil>
	W1124 03:13:21.212359  673767 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:13:21.213903  673767 out.go:252] * Restarting existing docker container for "embed-certs-284604" ...
	I1124 03:13:21.214003  673767 cli_runner.go:164] Run: docker start embed-certs-284604
	I1124 03:13:21.458544  673767 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:13:21.477043  673767 kic.go:430] container "embed-certs-284604" state is running.
	I1124 03:13:21.477399  673767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:13:21.494001  673767 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:13:21.494192  673767 machine.go:94] provisionDockerMachine start ...
	I1124 03:13:21.494272  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:21.511591  673767 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:21.511856  673767 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I1124 03:13:21.511873  673767 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:13:21.512614  673767 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49324->127.0.0.1:33503: read: connection reset by peer
	I1124 03:13:24.649100  673767 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:13:24.649139  673767 ubuntu.go:182] provisioning hostname "embed-certs-284604"
	I1124 03:13:24.649210  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:24.667549  673767 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:24.667764  673767 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I1124 03:13:24.667777  673767 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-284604 && echo "embed-certs-284604" | sudo tee /etc/hostname
	I1124 03:13:24.809983  673767 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:13:24.810072  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:24.827292  673767 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:24.827553  673767 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I1124 03:13:24.827580  673767 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-284604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-284604/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-284604' | sudo tee -a /etc/hosts; 
				fi
			fi
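
The three-second gap between the failed dial at 03:13:21 ("connection reset by peer") and the successful `hostname` run at 03:13:24 is the provisioner retrying until sshd inside the restarted container accepts connections again; every provisioning step here (hostname, the /etc/hosts snippet above) is a shell fragment pushed over that SSH session. A minimal sketch of running one such fragment with golang.org/x/crypto/ssh, assuming the mapped port 33503 and the machine key path shown in this log:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port are taken from the log above; adjust for other profiles.
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a localhost-mapped test container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33503", cfg)
	if err != nil {
		log.Fatal(err) // a real provisioner retries here instead of failing
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("remote hostname: %s", out)
}
```
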
	I1124 03:13:24.961631  673767 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:13:24.961659  673767 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:13:24.961707  673767 ubuntu.go:190] setting up certificates
	I1124 03:13:24.961729  673767 provision.go:84] configureAuth start
	I1124 03:13:24.961781  673767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:13:24.978993  673767 provision.go:143] copyHostCerts
	I1124 03:13:24.979052  673767 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:13:24.979069  673767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:13:24.979133  673767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:13:24.979228  673767 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:13:24.979243  673767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:13:24.979270  673767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:13:24.979338  673767 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:13:24.979346  673767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:13:24.979370  673767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:13:24.979434  673767 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-284604 san=[127.0.0.1 192.168.94.2 embed-certs-284604 localhost minikube]
	I1124 03:13:25.132758  673767 provision.go:177] copyRemoteCerts
	I1124 03:13:25.132818  673767 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:13:25.132866  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:25.149705  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:25.247440  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 03:13:25.264014  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:13:25.279874  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:13:25.295812  673767 provision.go:87] duration metric: took 334.070361ms to configureAuth
	I1124 03:13:25.295837  673767 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:13:25.296027  673767 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:13:25.296134  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:25.312742  673767 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:25.312959  673767 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I1124 03:13:25.312983  673767 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:13:25.617219  673767 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:13:25.617246  673767 machine.go:97] duration metric: took 4.123039203s to provisionDockerMachine
	I1124 03:13:25.617260  673767 start.go:293] postStartSetup for "embed-certs-284604" (driver="docker")
	I1124 03:13:25.617275  673767 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:13:25.617343  673767 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:13:25.617386  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:25.637174  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:25.734461  673767 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:13:25.737722  673767 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:13:25.737743  673767 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:13:25.737752  673767 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:13:25.737801  673767 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:13:25.737878  673767 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:13:25.737994  673767 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:13:25.745077  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:13:25.761050  673767 start.go:296] duration metric: took 143.775494ms for postStartSetup
	I1124 03:13:25.761108  673767 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:13:25.761139  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:25.778294  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:25.872298  673767 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:13:25.876609  673767 fix.go:56] duration metric: took 4.680421261s for fixHost
	I1124 03:13:25.876634  673767 start.go:83] releasing machines lock for "embed-certs-284604", held for 4.680464995s
	I1124 03:13:25.876703  673767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:13:25.893223  673767 ssh_runner.go:195] Run: cat /version.json
	I1124 03:13:25.893295  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:25.893354  673767 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:13:25.893434  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:25.910903  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:25.911287  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:26.005512  673767 ssh_runner.go:195] Run: systemctl --version
	I1124 03:13:26.078619  673767 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:13:26.111985  673767 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:13:26.116370  673767 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:13:26.116427  673767 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:13:26.123998  673767 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:13:26.124020  673767 start.go:496] detecting cgroup driver to use...
	I1124 03:13:26.124049  673767 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 03:13:26.124100  673767 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:13:26.137366  673767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:13:26.148272  673767 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:13:26.148328  673767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:13:26.161121  673767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:13:26.172030  673767 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:13:26.247968  673767 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:13:26.325591  673767 docker.go:234] disabling docker service ...
	I1124 03:13:26.325642  673767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:13:26.338520  673767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:13:26.349463  673767 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:13:26.426193  673767 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:13:26.501999  673767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:13:26.512931  673767 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:13:26.525655  673767 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:13:26.525703  673767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.533624  673767 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:13:26.533666  673767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.541730  673767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.549536  673767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.557411  673767 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:13:26.564562  673767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.572348  673767 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.579652  673767 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.587376  673767 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:13:26.593913  673767 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:13:26.600541  673767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:26.674417  673767 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:13:26.805721  673767 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:13:26.805778  673767 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:13:26.809498  673767 start.go:564] Will wait 60s for crictl version
	I1124 03:13:26.809545  673767 ssh_runner.go:195] Run: which crictl
	I1124 03:13:26.812755  673767 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:13:26.835988  673767 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:13:26.836067  673767 ssh_runner.go:195] Run: crio --version
	I1124 03:13:26.863207  673767 ssh_runner.go:195] Run: crio --version
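
The two 60-second waits above ("Will wait 60s for socket path", "Will wait 60s for crictl version") share one shape: poll until a condition holds or a deadline passes. A stdlib-only sketch of the socket wait, with the path and timeout taken from the log:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes, the same
// pattern as the "Will wait 60s" steps in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket ready")
}
```
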
	I1124 03:13:26.890909  673767 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:13:26.892032  673767 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:13:26.909903  673767 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:13:26.913595  673767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
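
The bash one-liner above rewrites /etc/hosts by keeping every line that is not the old host.minikube.internal mapping, appending the fresh one, and copying the temp file into place. The same upsert expressed in Go, purely as an illustration (the tab separator and names follow the log):

```go
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// upsertHost drops any existing line ending in "\t<name>" and appends
// "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.94.1", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("hosts entry updated")
}
```
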
	I1124 03:13:26.923532  673767 kubeadm.go:884] updating cluster {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:13:26.923632  673767 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:13:26.923674  673767 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:13:26.953820  673767 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:13:26.953838  673767 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:13:26.953879  673767 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:13:26.976967  673767 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:13:26.976985  673767 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:13:26.976993  673767 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:13:26.977087  673767 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-284604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:13:26.977140  673767 ssh_runner.go:195] Run: crio config
	I1124 03:13:27.020706  673767 cni.go:84] Creating CNI manager for ""
	I1124 03:13:27.020724  673767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:13:27.020740  673767 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:13:27.020760  673767 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-284604 NodeName:embed-certs-284604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:13:27.020880  673767 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-284604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:13:27.020953  673767 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:13:27.028655  673767 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:13:27.028709  673767 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:13:27.035781  673767 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 03:13:27.047656  673767 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:13:27.058850  673767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
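
The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new as one multi-document YAML stream (the 2214 bytes scp'd here). A sketch of pulling a single document back out of that stream, here the KubeletConfiguration, using the third-party gopkg.in/yaml.v3 decoder (an assumed dependency for illustration; minikube itself does not necessarily parse the file this way):

```go
package main

import (
	"bytes"
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// Only the fields we care about; unknown keys in each document are ignored.
type kubeletConfig struct {
	Kind         string `yaml:"kind"`
	CgroupDriver string `yaml:"cgroupDriver"`
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// The file holds several YAML documents separated by "---"; decode each
	// in turn until the stream is exhausted (Decode returns io.EOF).
	dec := yaml.NewDecoder(bytes.NewReader(data))
	for {
		var doc kubeletConfig
		if err := dec.Decode(&doc); err != nil {
			break
		}
		if doc.Kind == "KubeletConfiguration" {
			// Should print "systemd", matching the cgroup driver detected earlier.
			fmt.Println("kubelet cgroupDriver:", doc.CgroupDriver)
		}
	}
}
```
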
	I1124 03:13:27.070310  673767 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:13:27.073499  673767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:13:27.082423  673767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:27.160803  673767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:13:27.184692  673767 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604 for IP: 192.168.94.2
	I1124 03:13:27.184711  673767 certs.go:195] generating shared ca certs ...
	I1124 03:13:27.184732  673767 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:27.184905  673767 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:13:27.184986  673767 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:13:27.185004  673767 certs.go:257] generating profile certs ...
	I1124 03:13:27.185145  673767 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key
	I1124 03:13:27.185238  673767 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087
	I1124 03:13:27.185290  673767 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key
	I1124 03:13:27.185387  673767 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:13:27.185417  673767 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:13:27.185430  673767 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:13:27.185456  673767 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:13:27.185481  673767 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:13:27.185502  673767 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:13:27.185543  673767 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:13:27.186095  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:13:27.203654  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:13:27.220660  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:13:27.238775  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:13:27.263708  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:13:27.280354  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:13:27.295972  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:13:27.311550  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:13:27.327064  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:13:27.342758  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:13:27.358487  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:13:27.375141  673767 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:13:27.386596  673767 ssh_runner.go:195] Run: openssl version
	I1124 03:13:27.392118  673767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:13:27.399836  673767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:13:27.403114  673767 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:13:27.403159  673767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:13:27.436458  673767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:13:27.443327  673767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:13:27.451223  673767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:13:27.454519  673767 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:13:27.454562  673767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:13:27.487685  673767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:13:27.494693  673767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:13:27.502120  673767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:27.505417  673767 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:27.505459  673767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:27.538267  673767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:13:27.545261  673767 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:13:27.548661  673767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:13:27.582051  673767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:13:27.614279  673767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:13:27.646902  673767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:13:27.682043  673767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:13:27.726407  673767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
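
Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours; only certs that fail this check would be regenerated on restart. The equivalent test with Go's standard crypto/x509, as a minimal sketch (the first cert path from the log is used as the example):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>` from the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```
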
	I1124 03:13:27.779794  673767 kubeadm.go:401] StartCluster: {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:27.779907  673767 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:13:27.779990  673767 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:13:27.816976  673767 cri.go:89] found id: "bee45aa12c3da24d490c817cac60d2855a72aa70d2a66c610bbc0b141b008dbf"
	I1124 03:13:27.817008  673767 cri.go:89] found id: "bbf02917610133c48abd17535a3d2ae4b7bf5f001204872f0f6c240d1a35d582"
	I1124 03:13:27.817014  673767 cri.go:89] found id: "d2ca966aa30cf5ae6493816c664588714b29eded4d0b36ff92e650b04101b9da"
	I1124 03:13:27.817028  673767 cri.go:89] found id: "76f77f06071348df9904042948e1b1b6506e913800d6148de693cf689f54ff8b"
	I1124 03:13:27.817036  673767 cri.go:89] found id: ""
	I1124 03:13:27.817087  673767 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:13:27.831032  673767 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:13:27Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:13:27.831100  673767 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:13:27.839437  673767 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:13:27.839455  673767 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:13:27.839507  673767 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:13:27.846602  673767 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:13:27.847027  673767 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-284604" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:13:27.847136  673767 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-284604" cluster setting kubeconfig missing "embed-certs-284604" context setting]
	I1124 03:13:27.847407  673767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:27.848629  673767 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:13:27.856238  673767 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 03:13:27.856279  673767 kubeadm.go:602] duration metric: took 16.81619ms to restartPrimaryControlPlane
	I1124 03:13:27.856289  673767 kubeadm.go:403] duration metric: took 76.504087ms to StartCluster
	I1124 03:13:27.856305  673767 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:27.856405  673767 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:13:27.857521  673767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:27.857762  673767 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:13:27.857806  673767 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:13:27.857876  673767 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-284604"
	I1124 03:13:27.857900  673767 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-284604"
	W1124 03:13:27.857910  673767 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:13:27.857940  673767 addons.go:70] Setting dashboard=true in profile "embed-certs-284604"
	I1124 03:13:27.857952  673767 addons.go:70] Setting default-storageclass=true in profile "embed-certs-284604"
	I1124 03:13:27.857958  673767 addons.go:239] Setting addon dashboard=true in "embed-certs-284604"
	W1124 03:13:27.857966  673767 addons.go:248] addon dashboard should already be in state true
	I1124 03:13:27.857971  673767 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:13:27.857980  673767 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-284604"
	I1124 03:13:27.857992  673767 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:13:27.857945  673767 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:13:27.858227  673767 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:13:27.858476  673767 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:13:27.858548  673767 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:13:27.860704  673767 out.go:179] * Verifying Kubernetes components...
	I1124 03:13:27.861812  673767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:27.883991  673767 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:13:27.884065  673767 addons.go:239] Setting addon default-storageclass=true in "embed-certs-284604"
	W1124 03:13:27.884086  673767 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:13:27.884117  673767 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:13:27.884000  673767 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:13:27.884620  673767 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:13:27.885405  673767 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:13:27.885426  673767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:13:27.885480  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:27.886354  673767 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:13:27.887255  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:13:27.887268  673767 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:13:27.887323  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:27.921423  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:27.921731  673767 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:13:27.921752  673767 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:13:27.921807  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:27.923788  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:27.947653  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:27.999321  673767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:13:28.011628  673767 node_ready.go:35] waiting up to 6m0s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:13:28.036117  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:13:28.036141  673767 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:13:28.039267  673767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:13:28.050221  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:13:28.050241  673767 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:13:28.056552  673767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:13:28.066524  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:13:28.066541  673767 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:13:28.081521  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:13:28.081544  673767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:13:28.097923  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:13:28.097948  673767 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:13:28.113356  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:13:28.113379  673767 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:13:28.126139  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:13:28.126165  673767 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:13:28.137953  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:13:28.137972  673767 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:13:28.149743  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:13:28.149764  673767 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:13:28.161343  673767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:13:29.693106  673767 node_ready.go:49] node "embed-certs-284604" is "Ready"
	I1124 03:13:29.693153  673767 node_ready.go:38] duration metric: took 1.681471744s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:13:29.693171  673767 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:13:29.693236  673767 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:13:30.196725  673767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.157422343s)
	I1124 03:13:30.196810  673767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.140213299s)
	I1124 03:13:30.196921  673767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.035522661s)
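
The timestamps show the three apply commands were launched within about 120ms of each other (03:13:28.039, 03:13:28.056, 03:13:28.161) and all completed around 03:13:30.19: the addons are applied concurrently and the enable step waits for all of them. A sketch of that fan-out with golang.org/x/sync/errgroup (an assumed dependency; manifest paths are from the log, and the dashboard step actually passes ten -f flags in a single call):

```go
package main

import (
	"fmt"
	"os/exec"

	"golang.org/x/sync/errgroup"
)

func main() {
	manifests := []string{
		"/etc/kubernetes/addons/storage-provisioner.yaml",
		"/etc/kubernetes/addons/storageclass.yaml",
		"/etc/kubernetes/addons/dashboard-ns.yaml",
	}
	var g errgroup.Group
	for _, m := range manifests {
		m := m // pin for Go < 1.22 loop-variable semantics
		g.Go(func() error {
			// One apply per goroutine; g.Wait below collects the first error.
			out, err := exec.Command("kubectl", "apply", "-f", m).CombinedOutput()
			if err != nil {
				return fmt.Errorf("%s: %v\n%s", m, err, out)
			}
			return nil
		})
	}
	if err := g.Wait(); err != nil {
		fmt.Println("addon apply failed:", err)
		return
	}
	fmt.Println("all addon manifests applied")
}
```
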
	I1124 03:13:30.197051  673767 api_server.go:72] duration metric: took 2.339249161s to wait for apiserver process to appear ...
	I1124 03:13:30.197076  673767 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:13:30.197097  673767 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:13:30.198396  673767 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-284604 addons enable metrics-server
	
	I1124 03:13:30.202930  673767 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:13:30.202960  673767 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:13:30.207806  673767 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 03:13:30.208724  673767 addons.go:530] duration metric: took 2.350924584s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 03:13:30.698054  673767 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:13:30.702639  673767 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:13:30.702668  673767 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:13:31.197207  673767 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:13:31.201218  673767 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 03:13:31.202400  673767 api_server.go:141] control plane version: v1.34.1
	I1124 03:13:31.202424  673767 api_server.go:131] duration metric: took 1.005341455s to wait for apiserver health ...
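The 500 responses above are expected while the apiserver's post-start hooks ([-]poststarthook/rbac/bootstrap-roles, [-]poststarthook/scheduling/bootstrap-system-priority-classes) finish; minikube simply re-polls /healthz roughly every 500ms until it gets a 200. A minimal Go sketch of such a polling loop, assuming the URL from this run and skipping TLS verification for the self-signed bootstrap certificate (illustrative only, not minikube's actual api_server.go):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The bootstrap apiserver presents a self-signed certificate, so this
		// throwaway probe skips verification; real clients should pin the CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				return nil // every post-start hook reports [+] ok
			}
			// 500 means at least one hook is still [-] failed, as in the log
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s still unhealthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}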
	I1124 03:13:31.202435  673767 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:13:31.206017  673767 system_pods.go:59] 8 kube-system pods found
	I1124 03:13:31.206062  673767 system_pods.go:61] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:13:31.206074  673767 system_pods.go:61] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:13:31.206090  673767 system_pods.go:61] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:13:31.206103  673767 system_pods.go:61] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:13:31.206114  673767 system_pods.go:61] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:13:31.206120  673767 system_pods.go:61] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:13:31.206128  673767 system_pods.go:61] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:13:31.206140  673767 system_pods.go:61] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:13:31.206152  673767 system_pods.go:74] duration metric: took 3.71031ms to wait for pod list to return data ...
	I1124 03:13:31.206161  673767 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:13:31.208556  673767 default_sa.go:45] found service account: "default"
	I1124 03:13:31.208573  673767 default_sa.go:55] duration metric: took 2.405825ms for default service account to be created ...
	I1124 03:13:31.208580  673767 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:13:31.211125  673767 system_pods.go:86] 8 kube-system pods found
	I1124 03:13:31.211153  673767 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:13:31.211165  673767 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:13:31.211177  673767 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:13:31.211189  673767 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:13:31.211198  673767 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:13:31.211206  673767 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:13:31.211214  673767 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:13:31.211222  673767 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:13:31.211230  673767 system_pods.go:126] duration metric: took 2.64538ms to wait for k8s-apps to be running ...
	I1124 03:13:31.211241  673767 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:13:31.211291  673767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:31.223769  673767 system_svc.go:56] duration metric: took 12.524362ms WaitForService to wait for kubelet
	I1124 03:13:31.223792  673767 kubeadm.go:587] duration metric: took 3.365995248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:13:31.223809  673767 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:13:31.226069  673767 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:13:31.226088  673767 node_conditions.go:123] node cpu capacity is 8
	I1124 03:13:31.226103  673767 node_conditions.go:105] duration metric: took 2.288869ms to run NodePressure ...
	I1124 03:13:31.226113  673767 start.go:242] waiting for startup goroutines ...
	I1124 03:13:31.226120  673767 start.go:247] waiting for cluster config update ...
	I1124 03:13:31.226131  673767 start.go:256] writing updated cluster config ...
	I1124 03:13:31.226352  673767 ssh_runner.go:195] Run: rm -f paused
	I1124 03:13:31.229713  673767 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:13:31.232418  673767 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:13:33.236490  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:35.238350  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:37.238821  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:39.737222  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:42.237496  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:44.738098  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:47.236763  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:49.237506  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:51.737027  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:53.737721  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:55.738650  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:58.237261  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:14:00.736950  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:14:02.737274  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:14:05.240010  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	I1124 03:14:07.236919  673767 pod_ready.go:94] pod "coredns-66bc5c9577-89mzc" is "Ready"
	I1124 03:14:07.236949  673767 pod_ready.go:86] duration metric: took 36.004512715s for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.239000  673767 pod_ready.go:83] waiting for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.242401  673767 pod_ready.go:94] pod "etcd-embed-certs-284604" is "Ready"
	I1124 03:14:07.242434  673767 pod_ready.go:86] duration metric: took 3.412424ms for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.244333  673767 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.247598  673767 pod_ready.go:94] pod "kube-apiserver-embed-certs-284604" is "Ready"
	I1124 03:14:07.247620  673767 pod_ready.go:86] duration metric: took 3.267496ms for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.249363  673767 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.435324  673767 pod_ready.go:94] pod "kube-controller-manager-embed-certs-284604" is "Ready"
	I1124 03:14:07.435349  673767 pod_ready.go:86] duration metric: took 185.964379ms for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.635979  673767 pod_ready.go:83] waiting for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:08.035382  673767 pod_ready.go:94] pod "kube-proxy-bn8fd" is "Ready"
	I1124 03:14:08.035410  673767 pod_ready.go:86] duration metric: took 399.407799ms for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:08.236064  673767 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:08.635477  673767 pod_ready.go:94] pod "kube-scheduler-embed-certs-284604" is "Ready"
	I1124 03:14:08.635505  673767 pod_ready.go:86] duration metric: took 399.417659ms for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:08.635517  673767 pod_ready.go:40] duration metric: took 37.405775482s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
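The 36s of "extra waiting" above is dominated by coredns-66bc5c9577-89mzc, whose Ready condition only flipped once it could reach the API again. A hedged client-go sketch of this kind of labeled-pod readiness wait (the selector, kubeconfig path, and 2s poll interval are assumptions, not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the pod's Ready condition is True.
func podIsReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitReady polls pods matching selector in ns until all report Ready.
func waitReady(cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for i := range pods.Items {
				if !podIsReady(&pods.Items[i]) {
					ready = false // e.g. coredns stayed unready for ~36s above
					break
				}
			}
			if ready {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // the log polls roughly every 2s
	}
	return fmt.Errorf("pods %q in %q not Ready within %s", selector, ns, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitReady(cs, "kube-system", "k8s-app=kube-dns", 4*time.Minute))
}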
	I1124 03:14:08.676863  673767 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:14:08.678754  673767 out.go:179] * Done! kubectl is now configured to use "embed-certs-284604" cluster and "default" namespace by default
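The closing kubectl line compares client and cluster minor versions (1.34 vs 1.34, hence skew 0); kubectl officially supports a skew of one minor version in either direction. A tiny illustrative check (the parsing and the warning threshold here are assumptions, and the input is assumed to be a well-formed "major.minor.patch" string):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component from a "major.minor.patch" version.
func minor(v string) int {
	m, _ := strconv.Atoi(strings.Split(v, ".")[1])
	return m
}

func main() {
	client, cluster := "1.34.2", "1.34.1"
	skew := minor(client) - minor(cluster)
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
	// kubectl supports one minor version of skew in either direction
	if skew > 1 {
		fmt.Println("warning: kubectl version skew exceeds the supported +/-1 minor")
	}
}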
	
	
	==> CRI-O <==
	Nov 24 03:13:41 embed-certs-284604 crio[579]: time="2025-11-24T03:13:41.04718773Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 03:13:41 embed-certs-284604 crio[579]: time="2025-11-24T03:13:41.320626947Z" level=info msg="Removing container: 51dc964de92e7ee4291fb6ce4b39ea18a39fed7ab04c8928c2a3a956bad71e32" id=7f94d0df-e69d-4284-9cf3-b11714424ff1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:13:41 embed-certs-284604 crio[579]: time="2025-11-24T03:13:41.329003555Z" level=info msg="Removed container 51dc964de92e7ee4291fb6ce4b39ea18a39fed7ab04c8928c2a3a956bad71e32: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87/dashboard-metrics-scraper" id=7f94d0df-e69d-4284-9cf3-b11714424ff1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.267869138Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8e010990-f57f-4c93-8e35-a409f67a9dce name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.268712511Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f713c98d-f2ac-41f4-a17d-662ff60cd72c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.269576224Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87/dashboard-metrics-scraper" id=a7209381-18d1-4039-aaac-7f1a5dbde526 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.269708129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.275593601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.276041298Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.306406953Z" level=info msg="Created container 8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87/dashboard-metrics-scraper" id=a7209381-18d1-4039-aaac-7f1a5dbde526 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.306810304Z" level=info msg="Starting container: 8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0" id=a3b816a0-417b-4f01-9e3c-02c0e7efa672 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.308513325Z" level=info msg="Started container" PID=1786 containerID=8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87/dashboard-metrics-scraper id=a3b816a0-417b-4f01-9e3c-02c0e7efa672 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c130d39491b124112a8af59d98589aeb6346eecbb7a22ed51df8ebda50a393d8
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.356370014Z" level=info msg="Removing container: cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100" id=13b6ccde-76ea-4f16-96be-3feab02626bf name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.364424158Z" level=info msg="Removed container cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87/dashboard-metrics-scraper" id=13b6ccde-76ea-4f16-96be-3feab02626bf name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.367330859Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bb277b43-3ec4-48f0-acc7-5092d2188d5c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.368312909Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8cbfa3c5-db34-40ba-a97e-f1c0093eddf3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.369298456Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=97bfb23f-1213-4dc5-8c8e-40964e7f0c88 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.369432191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.374128556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.374319186Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/09dc8e94e4b287ac9d0e2d9e7aaa3e4da235c835687a3e6646d22065c58d0bed/merged/etc/passwd: no such file or directory"
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.374344697Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/09dc8e94e4b287ac9d0e2d9e7aaa3e4da235c835687a3e6646d22065c58d0bed/merged/etc/group: no such file or directory"
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.37462913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.405059636Z" level=info msg="Created container 7dfe56080d905fccce6df6410252e5de80a8471a616d9694d5005304fc5a8607: kube-system/storage-provisioner/storage-provisioner" id=97bfb23f-1213-4dc5-8c8e-40964e7f0c88 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.405616545Z" level=info msg="Starting container: 7dfe56080d905fccce6df6410252e5de80a8471a616d9694d5005304fc5a8607" id=bff1b36d-89b7-42f1-bd0d-4b41086fcc07 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.407665786Z" level=info msg="Started container" PID=1800 containerID=7dfe56080d905fccce6df6410252e5de80a8471a616d9694d5005304fc5a8607 description=kube-system/storage-provisioner/storage-provisioner id=bff1b36d-89b7-42f1-bd0d-4b41086fcc07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ccfb1d626ba5f0ed6b630c3b86e9df642891dd98afe66369e741858c8793b1e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7dfe56080d905       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   4ccfb1d626ba5       storage-provisioner                          kube-system
	8f8caf7133997       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   c130d39491b12       dashboard-metrics-scraper-6ffb444bf9-ftn87   kubernetes-dashboard
	c01f29a7b7ec8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   907cd7bb38a4d       kubernetes-dashboard-855c9754f9-fbjrx        kubernetes-dashboard
	b8e427ddf2a88       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   6d4feca0b6558       busybox                                      default
	7c9de32e2a2a1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   183e25f5dcd97       coredns-66bc5c9577-89mzc                     kube-system
	d1fd18ad940c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   4ccfb1d626ba5       storage-provisioner                          kube-system
	7f8f00c980f03       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   5d30f27c49f2c       kindnet-7tbg8                                kube-system
	e2e368c8131a6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   301a118fce7ea       kube-proxy-bn8fd                             kube-system
	bee45aa12c3da       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           55 seconds ago      Running             kube-apiserver              0                   527b65254b7f2       kube-apiserver-embed-certs-284604            kube-system
	bbf0291761013       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           55 seconds ago      Running             kube-controller-manager     0                   b300b1fe3cf9b       kube-controller-manager-embed-certs-284604   kube-system
	d2ca966aa30cf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           55 seconds ago      Running             kube-scheduler              0                   2a9b4be670473       kube-scheduler-embed-certs-284604            kube-system
	76f77f0607134       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           55 seconds ago      Running             etcd                        0                   aa11e4fa7f25c       etcd-embed-certs-284604                      kube-system
	
	
	==> coredns [7c9de32e2a2a1a4bf1dcb74cc732ea9bf8940a38a5255b6219921850e280f953] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58231 - 4256 "HINFO IN 2377575116500461309.1616807941125280626. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.107311116s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
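These reflector timeouts say CoreDNS could not reach the kubernetes Service VIP (10.96.0.1:443) during the window after the restart, before kube-proxy and kindnet had re-programmed their rules; they stop once the dataplane syncs. A minimal in-cluster probe for that exact condition (the address and 2s timeout are illustrative choices):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 10.96.0.1:443 is the default kubernetes Service ClusterIP.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
	if err != nil {
		fmt.Println("VIP unreachable (service rules likely not synced yet):", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable: kube-proxy has programmed the service rules")
}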
	
	
	==> describe nodes <==
	Name:               embed-certs-284604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-284604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=embed-certs-284604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_12_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:12:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-284604
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:14:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:14:00 +0000   Mon, 24 Nov 2025 03:12:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:14:00 +0000   Mon, 24 Nov 2025 03:12:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:14:00 +0000   Mon, 24 Nov 2025 03:12:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:14:00 +0000   Mon, 24 Nov 2025 03:13:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-284604
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                069cc4ec-f604-4b4c-a3d4-6c93aa172617
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-89mzc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     102s
	  kube-system                 etcd-embed-certs-284604                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-7tbg8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-embed-certs-284604             250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-embed-certs-284604    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-bn8fd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-embed-certs-284604             100m (1%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ftn87    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fbjrx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 52s                kube-proxy       
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  108s               kubelet          Node embed-certs-284604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    108s               kubelet          Node embed-certs-284604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s               kubelet          Node embed-certs-284604 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           104s               node-controller  Node embed-certs-284604 event: Registered Node embed-certs-284604 in Controller
	  Normal  NodeReady                92s                kubelet          Node embed-certs-284604 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node embed-certs-284604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node embed-certs-284604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node embed-certs-284604 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node embed-certs-284604 event: Registered Node embed-certs-284604 in Controller
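As a sanity check on the Allocated resources table above: the per-pod CPU requests sum to 100m + 100m + 100m + 250m + 200m + 100m = 850m, and 850m of this node's 8 CPUs (8000m) is ~10.6%, which kubectl rounds down to the 10% shown; likewise the 70Mi + 100Mi + 50Mi memory requests give the 220Mi figure.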
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [76f77f06071348df9904042948e1b1b6506e913800d6148de693cf689f54ff8b] <==
	{"level":"warn","ts":"2025-11-24T03:13:29.105262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.114131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.120359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.126391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.133834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.140582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.148509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.155205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.162827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.174975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.184907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.189763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.196453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.202032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.207824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.213436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.219240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.224872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.230689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.236778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.242640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.256492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.262217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.267909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.311954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48486","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:14:23 up  1:56,  0 user,  load average: 1.64, 3.38, 2.56
	Linux embed-certs-284604 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7f8f00c980f03b3a444b2377b90496745dbb07c9bd8f9baeb585c8435ae1c9dc] <==
	I1124 03:13:30.824297       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:13:30.824584       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 03:13:30.824759       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:13:30.824777       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:13:30.824803       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:13:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:13:31.024751       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:13:31.024846       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:13:31.024862       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:13:31.120508       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:13:31.420256       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:13:31.420277       1 metrics.go:72] Registering metrics
	I1124 03:13:31.420335       1 controller.go:711] "Syncing nftables rules"
	I1124 03:13:41.025584       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:13:41.025658       1 main.go:301] handling current node
	I1124 03:13:51.025356       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:13:51.025409       1 main.go:301] handling current node
	I1124 03:14:01.025373       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:14:01.025412       1 main.go:301] handling current node
	I1124 03:14:11.028229       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:14:11.028263       1 main.go:301] handling current node
	I1124 03:14:21.034325       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:14:21.034376       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bee45aa12c3da24d490c817cac60d2855a72aa70d2a66c610bbc0b141b008dbf] <==
	I1124 03:13:29.773459       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:13:29.773490       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:13:29.773447       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 03:13:29.773580       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 03:13:29.773756       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 03:13:29.774393       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 03:13:29.775507       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 03:13:29.775563       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:13:29.779866       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1124 03:13:29.780795       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 03:13:29.798433       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 03:13:29.813953       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 03:13:29.813973       1 policy_source.go:240] refreshing policies
	I1124 03:13:29.898926       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:13:30.024500       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:13:30.048409       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:13:30.062979       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:13:30.070119       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:13:30.075282       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:13:30.104286       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.121.191"}
	I1124 03:13:30.112842       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.49.227"}
	I1124 03:13:30.676788       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:13:33.211525       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:13:33.512795       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:13:33.611455       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bbf02917610133c48abd17535a3d2ae4b7bf5f001204872f0f6c240d1a35d582] <==
	I1124 03:13:33.062156       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:13:33.065335       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:13:33.065373       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:13:33.079618       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 03:13:33.109516       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:13:33.109535       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:13:33.109581       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:13:33.109614       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 03:13:33.109629       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 03:13:33.109700       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:13:33.109721       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 03:13:33.109734       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 03:13:33.109790       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:13:33.109863       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:13:33.112281       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 03:13:33.113414       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:13:33.115659       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:13:33.115744       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 03:13:33.116920       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:13:33.119171       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:13:33.123455       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:13:33.123467       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:13:33.123474       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:13:33.130383       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:13:33.134678       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e2e368c8131a6bdffbca9bf069eec5b0d46432a4f32e063227f4393352e1c12b] <==
	I1124 03:13:30.665177       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:13:30.724059       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:13:30.824415       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:13:30.824452       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 03:13:30.824594       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:13:30.844662       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:13:30.844710       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:13:30.850332       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:13:30.850773       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:13:30.850812       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:13:30.852333       1 config.go:200] "Starting service config controller"
	I1124 03:13:30.852357       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:13:30.852402       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:13:30.852435       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:13:30.852480       1 config.go:309] "Starting node config controller"
	I1124 03:13:30.852519       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:13:30.852526       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:13:30.852549       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:13:30.852574       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:13:30.952806       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:13:30.952825       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:13:30.952853       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d2ca966aa30cf5ae6493816c664588714b29eded4d0b36ff92e650b04101b9da] <==
	I1124 03:13:28.711756       1 serving.go:386] Generated self-signed cert in-memory
	W1124 03:13:29.710385       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:13:29.710424       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1124 03:13:29.710438       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:13:29.710448       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:13:29.733869       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:13:29.733941       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:13:29.736522       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:13:29.736565       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:13:29.736968       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:13:29.737089       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:13:29.837091       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:13:33 embed-certs-284604 kubelet[743]: I1124 03:13:33.825513     743 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmrbz\" (UniqueName: \"kubernetes.io/projected/3bee292f-a657-46ec-b8d0-f1389ede44cc-kube-api-access-jmrbz\") pod \"dashboard-metrics-scraper-6ffb444bf9-ftn87\" (UID: \"3bee292f-a657-46ec-b8d0-f1389ede44cc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87"
	Nov 24 03:13:33 embed-certs-284604 kubelet[743]: I1124 03:13:33.825535     743 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvlht\" (UniqueName: \"kubernetes.io/projected/f370dc97-efc4-4903-a62a-e6af42b5f4f9-kube-api-access-qvlht\") pod \"kubernetes-dashboard-855c9754f9-fbjrx\" (UID: \"f370dc97-efc4-4903-a62a-e6af42b5f4f9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbjrx"
	Nov 24 03:13:37 embed-certs-284604 kubelet[743]: I1124 03:13:37.167562     743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 03:13:38 embed-certs-284604 kubelet[743]: I1124 03:13:38.322043     743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbjrx" podStartSLOduration=1.9262234710000001 podStartE2EDuration="5.322022304s" podCreationTimestamp="2025-11-24 03:13:33 +0000 UTC" firstStartedPulling="2025-11-24 03:13:34.060821149 +0000 UTC m=+6.873562205" lastFinishedPulling="2025-11-24 03:13:37.456619968 +0000 UTC m=+10.269361038" observedRunningTime="2025-11-24 03:13:38.321572446 +0000 UTC m=+11.134313531" watchObservedRunningTime="2025-11-24 03:13:38.322022304 +0000 UTC m=+11.134763376"
	Nov 24 03:13:40 embed-certs-284604 kubelet[743]: I1124 03:13:40.315749     743 scope.go:117] "RemoveContainer" containerID="51dc964de92e7ee4291fb6ce4b39ea18a39fed7ab04c8928c2a3a956bad71e32"
	Nov 24 03:13:41 embed-certs-284604 kubelet[743]: I1124 03:13:41.319346     743 scope.go:117] "RemoveContainer" containerID="51dc964de92e7ee4291fb6ce4b39ea18a39fed7ab04c8928c2a3a956bad71e32"
	Nov 24 03:13:41 embed-certs-284604 kubelet[743]: I1124 03:13:41.319513     743 scope.go:117] "RemoveContainer" containerID="cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100"
	Nov 24 03:13:41 embed-certs-284604 kubelet[743]: E1124 03:13:41.319740     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ftn87_kubernetes-dashboard(3bee292f-a657-46ec-b8d0-f1389ede44cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87" podUID="3bee292f-a657-46ec-b8d0-f1389ede44cc"
	Nov 24 03:13:42 embed-certs-284604 kubelet[743]: I1124 03:13:42.322687     743 scope.go:117] "RemoveContainer" containerID="cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100"
	Nov 24 03:13:42 embed-certs-284604 kubelet[743]: E1124 03:13:42.322917     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ftn87_kubernetes-dashboard(3bee292f-a657-46ec-b8d0-f1389ede44cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87" podUID="3bee292f-a657-46ec-b8d0-f1389ede44cc"
	Nov 24 03:13:43 embed-certs-284604 kubelet[743]: I1124 03:13:43.347128     743 scope.go:117] "RemoveContainer" containerID="cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100"
	Nov 24 03:13:43 embed-certs-284604 kubelet[743]: E1124 03:13:43.347272     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ftn87_kubernetes-dashboard(3bee292f-a657-46ec-b8d0-f1389ede44cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87" podUID="3bee292f-a657-46ec-b8d0-f1389ede44cc"
	Nov 24 03:13:57 embed-certs-284604 kubelet[743]: I1124 03:13:57.267486     743 scope.go:117] "RemoveContainer" containerID="cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100"
	Nov 24 03:13:57 embed-certs-284604 kubelet[743]: I1124 03:13:57.355227     743 scope.go:117] "RemoveContainer" containerID="cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100"
	Nov 24 03:13:57 embed-certs-284604 kubelet[743]: I1124 03:13:57.355392     743 scope.go:117] "RemoveContainer" containerID="8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0"
	Nov 24 03:13:57 embed-certs-284604 kubelet[743]: E1124 03:13:57.355560     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ftn87_kubernetes-dashboard(3bee292f-a657-46ec-b8d0-f1389ede44cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87" podUID="3bee292f-a657-46ec-b8d0-f1389ede44cc"
	Nov 24 03:14:01 embed-certs-284604 kubelet[743]: I1124 03:14:01.366913     743 scope.go:117] "RemoveContainer" containerID="d1fd18ad940c962cf45cd1bcc24444e576f59c99eaf790532d0fef509627de0c"
	Nov 24 03:14:03 embed-certs-284604 kubelet[743]: I1124 03:14:03.347845     743 scope.go:117] "RemoveContainer" containerID="8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0"
	Nov 24 03:14:03 embed-certs-284604 kubelet[743]: E1124 03:14:03.348088     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ftn87_kubernetes-dashboard(3bee292f-a657-46ec-b8d0-f1389ede44cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87" podUID="3bee292f-a657-46ec-b8d0-f1389ede44cc"
	Nov 24 03:14:14 embed-certs-284604 kubelet[743]: I1124 03:14:14.267905     743 scope.go:117] "RemoveContainer" containerID="8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0"
	Nov 24 03:14:14 embed-certs-284604 kubelet[743]: E1124 03:14:14.268081     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ftn87_kubernetes-dashboard(3bee292f-a657-46ec-b8d0-f1389ede44cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87" podUID="3bee292f-a657-46ec-b8d0-f1389ede44cc"
	Nov 24 03:14:20 embed-certs-284604 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 03:14:20 embed-certs-284604 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 03:14:20 embed-certs-284604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 03:14:20 embed-certs-284604 systemd[1]: kubelet.service: Consumed 1.427s CPU time.
	
	
	==> kubernetes-dashboard [c01f29a7b7ec8871b22514ceac1950d3ab216fa11d1e3c795917584a750a2e70] <==
	2025/11/24 03:13:37 Using namespace: kubernetes-dashboard
	2025/11/24 03:13:37 Using in-cluster config to connect to apiserver
	2025/11/24 03:13:37 Using secret token for csrf signing
	2025/11/24 03:13:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 03:13:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 03:13:37 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 03:13:37 Generating JWE encryption key
	2025/11/24 03:13:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 03:13:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 03:13:37 Initializing JWE encryption key from synchronized object
	2025/11/24 03:13:37 Creating in-cluster Sidecar client
	2025/11/24 03:13:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 03:13:37 Serving insecurely on HTTP port: 9090
	2025/11/24 03:14:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 03:13:37 Starting overwatch
	
	
	==> storage-provisioner [7dfe56080d905fccce6df6410252e5de80a8471a616d9694d5005304fc5a8607] <==
	I1124 03:14:01.419597       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:14:01.426576       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:14:01.426610       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:14:01.428141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:04.883091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:09.143470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:12.741844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:15.795699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:18.817404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:18.821331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:14:18.821459       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:14:18.821604       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49bb5acd-171e-4aa8-8356-6bac5deb0205", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-284604_3125ec58-ae72-4152-85f4-98843fc71951 became leader
	I1124 03:14:18.821640       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-284604_3125ec58-ae72-4152-85f4-98843fc71951!
	W1124 03:14:18.823268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:18.826518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:14:18.921853       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-284604_3125ec58-ae72-4152-85f4-98843fc71951!
	W1124 03:14:20.829439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:20.832825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:22.836753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:22.840536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d1fd18ad940c962cf45cd1bcc24444e576f59c99eaf790532d0fef509627de0c] <==
	I1124 03:13:30.639428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 03:14:00.641455       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
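For context on the storage-provisioner failure captured above: the first instance exited because its startup call to the apiserver service VIP (10.96.0.1:443) timed out, most likely because the service network was not yet ready after the container restart; the replacement instance succeeded roughly 30 seconds later. Below is a minimal client-go sketch of that kind of startup probe, illustrative only and not the provisioner's actual source:

	package main

	import (
		"fmt"
		"log"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// In-cluster config uses the pod's service-account token and the
		// kubernetes.default service VIP (10.96.0.1:443 in this cluster).
		cfg, err := rest.InClusterConfig()
		if err != nil {
			log.Fatalf("error building in-cluster config: %v", err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("error building clientset: %v", err)
		}
		// Same shape as the failing call above: GET /version. A dial
		// timeout here usually means the service network is not ready.
		v, err := clientset.Discovery().ServerVersion()
		if err != nil {
			log.Fatalf("error getting server version: %v", err)
		}
		fmt.Printf("connected, server version: %s\n", v.GitVersion)
	}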
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-284604 -n embed-certs-284604
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-284604 -n embed-certs-284604: exit status 2 (315.381125ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-284604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
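The 10s and 20s delays in the kubelet entries above are the first steps of kubelet's CrashLoopBackOff schedule, which doubles the restart delay after each failed start and caps it at five minutes. A toy sketch of that schedule, illustrative only and not kubelet code:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// CrashLoopBackOff: delay doubles per failed restart, capped at 5m.
		const max = 5 * time.Minute
		delay := 10 * time.Second
		for attempt := 1; attempt <= 6; attempt++ {
			fmt.Printf("restart %d: back-off %s\n", attempt, delay)
			delay *= 2
			if delay > max {
				delay = max
			}
		}
	}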
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-284604
helpers_test.go:243: (dbg) docker inspect embed-certs-284604:

-- stdout --
	[
	    {
	        "Id": "65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa",
	        "Created": "2025-11-24T03:12:13.144496823Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 673971,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:13:21.236174017Z",
	            "FinishedAt": "2025-11-24T03:13:20.43779445Z"
	        },
	        "Image": "sha256:cfc5db6e94549413134f251b33e15399a9f8a376c7daf23bfd6c853469fc1524",
	        "ResolvConfPath": "/var/lib/docker/containers/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa/hostname",
	        "HostsPath": "/var/lib/docker/containers/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa/hosts",
	        "LogPath": "/var/lib/docker/containers/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa/65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa-json.log",
	        "Name": "/embed-certs-284604",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-284604:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-284604",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "65dda7ef92bdfb81b90abb1b766fb24588a6ab64d4c93e49d87c9b25eea5d8fa",
	                "LowerDir": "/var/lib/docker/overlay2/af574c392758f969970a6326011c4ed57be640b8c05a9d111cafa150f3cb303b-init/diff:/var/lib/docker/overlay2/8937cb394b0788c6be8b82699bf7c7f3003f8ef3467cc601eeec6b457c84ee1f/diff",
	                "MergedDir": "/var/lib/docker/overlay2/af574c392758f969970a6326011c4ed57be640b8c05a9d111cafa150f3cb303b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/af574c392758f969970a6326011c4ed57be640b8c05a9d111cafa150f3cb303b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/af574c392758f969970a6326011c4ed57be640b8c05a9d111cafa150f3cb303b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-284604",
	                "Source": "/var/lib/docker/volumes/embed-certs-284604/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-284604",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-284604",
	                "name.minikube.sigs.k8s.io": "embed-certs-284604",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6d853c450794ba7800c4bfd667592ac27604a35945845afdcb76eae0f6b44d03",
	            "SandboxKey": "/var/run/docker/netns/6d853c450794",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-284604": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "1d9fd759284ca1283df730e0f7d581869748db9e3cd1619451e948defda88535",
	                    "EndpointID": "a4a40d4b817ffe8f5454c7ea15cda79603736ee2cbd74c8a04e9d5f8c45249af",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "0a:53:04:2a:d6:89",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-284604",
	                        "65dda7ef92bd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
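The docker inspect dump above is collected by shelling out to the docker CLI. The same container state and port-binding data can be read programmatically; here is a minimal sketch using the Docker Engine Go SDK (github.com/docker/docker/client), shown only as an illustration of the data source, not how helpers_test.go gathers it:

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Same payload as `docker inspect embed-certs-284604`.
		info, err := cli.ContainerInspect(context.Background(), "embed-certs-284604")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("status:", info.State.Status, "paused:", info.State.Paused)
		for port, bindings := range info.NetworkSettings.Ports {
			for _, b := range bindings {
				fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
			}
		}
	}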
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284604 -n embed-certs-284604
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284604 -n embed-certs-284604: exit status 2 (312.90623ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
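The --format={{.Host}} and --format={{.APIServer}} flags above are Go text/template expressions evaluated against minikube's status output. A self-contained sketch of that evaluation; the Status struct here is a hypothetical stand-in carrying just the two fields these commands reference:

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// Hypothetical stand-in for the status fields referenced above.
	type Status struct {
		Host      string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", APIServer: "Running"}
		// --format={{.APIServer}} is parsed and executed the same way.
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			log.Fatal(err)
		}
	}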
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-284604 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-284604 logs -n 25: (1.006708461s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable dashboard -p no-preload-603010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                             │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                     │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                  │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p newest-cni-438041                                                                                                                                                     │ newest-cni-438041            │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p disable-driver-mounts-242597                                                                                                                                          │ disable-driver-mounts-242597 │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ start   │ -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ image   │ old-k8s-version-579951 image list --format=json                                                                                                                          │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ pause   │ -p old-k8s-version-579951 --alsologtostderr -v=1                                                                                                                         │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │                     │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ delete  │ -p old-k8s-version-579951                                                                                                                                                │ old-k8s-version-579951       │ jenkins │ v1.37.0 │ 24 Nov 25 03:12 UTC │ 24 Nov 25 03:12 UTC │
	│ addons  │ enable metrics-server -p embed-certs-284604 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ stop    │ -p embed-certs-284604 --alsologtostderr -v=3                                                                                                                             │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ image   │ default-k8s-diff-port-993813 image list --format=json                                                                                                                    │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ pause   │ -p default-k8s-diff-port-993813 --alsologtostderr -v=1                                                                                                                   │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ image   │ no-preload-603010 image list --format=json                                                                                                                               │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ pause   │ -p no-preload-603010 --alsologtostderr -v=1                                                                                                                              │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-993813                                                                                                                                          │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ delete  │ -p default-k8s-diff-port-993813                                                                                                                                          │ default-k8s-diff-port-993813 │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ delete  │ -p no-preload-603010                                                                                                                                                     │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ delete  │ -p no-preload-603010                                                                                                                                                     │ no-preload-603010            │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ addons  │ enable dashboard -p embed-certs-284604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                            │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:13 UTC │
	│ start   │ -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:13 UTC │ 24 Nov 25 03:14 UTC │
	│ image   │ embed-certs-284604 image list --format=json                                                                                                                              │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │ 24 Nov 25 03:14 UTC │
	│ pause   │ -p embed-certs-284604 --alsologtostderr -v=1                                                                                                                             │ embed-certs-284604           │ jenkins │ v1.37.0 │ 24 Nov 25 03:14 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:13:21
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:13:21.023697  673767 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:13:21.023794  673767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:21.023805  673767 out.go:374] Setting ErrFile to fd 2...
	I1124 03:13:21.023810  673767 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:13:21.024024  673767 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:13:21.024485  673767 out.go:368] Setting JSON to false
	I1124 03:13:21.025474  673767 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6948,"bootTime":1763947053,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:13:21.025524  673767 start.go:143] virtualization: kvm guest
	I1124 03:13:21.027092  673767 out.go:179] * [embed-certs-284604] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:13:21.028207  673767 notify.go:221] Checking for updates...
	I1124 03:13:21.028214  673767 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:13:21.029309  673767 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:13:21.030380  673767 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:13:21.031472  673767 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:13:21.032575  673767 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:13:21.033601  673767 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:13:21.034917  673767 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:13:21.035435  673767 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:13:21.058427  673767 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:13:21.058540  673767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:21.112477  673767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-24 03:13:21.102640806 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:21.112584  673767 docker.go:319] overlay module found
	I1124 03:13:21.113967  673767 out.go:179] * Using the docker driver based on existing profile
	I1124 03:13:21.115143  673767 start.go:309] selected driver: docker
	I1124 03:13:21.115157  673767 start.go:927] validating driver "docker" against &{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:21.115246  673767 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:13:21.115798  673767 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:13:21.171790  673767 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-11-24 03:13:21.163161885 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:13:21.172086  673767 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:13:21.172125  673767 cni.go:84] Creating CNI manager for ""
	I1124 03:13:21.172199  673767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:13:21.172243  673767 start.go:353] cluster config:
	{Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:21.173815  673767 out.go:179] * Starting "embed-certs-284604" primary control-plane node in "embed-certs-284604" cluster
	I1124 03:13:21.174821  673767 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 03:13:21.175933  673767 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:13:21.176985  673767 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:13:21.177015  673767 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 03:13:21.177026  673767 cache.go:65] Caching tarball of preloaded images
	I1124 03:13:21.177061  673767 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:13:21.177120  673767 preload.go:238] Found /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 03:13:21.177135  673767 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 03:13:21.177243  673767 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:13:21.196030  673767 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:13:21.196045  673767 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:13:21.196059  673767 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:13:21.196093  673767 start.go:360] acquireMachinesLock for embed-certs-284604: {Name:mkd39be5908e1d289ed5af40b6c2b1c510beffd8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:13:21.196158  673767 start.go:364] duration metric: took 36.894µs to acquireMachinesLock for "embed-certs-284604"
	I1124 03:13:21.196174  673767 start.go:96] Skipping create...Using existing machine configuration
	I1124 03:13:21.196181  673767 fix.go:54] fixHost starting: 
	I1124 03:13:21.196382  673767 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:13:21.212338  673767 fix.go:112] recreateIfNeeded on embed-certs-284604: state=Stopped err=<nil>
	W1124 03:13:21.212359  673767 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 03:13:21.213903  673767 out.go:252] * Restarting existing docker container for "embed-certs-284604" ...
	I1124 03:13:21.214003  673767 cli_runner.go:164] Run: docker start embed-certs-284604
	I1124 03:13:21.458544  673767 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:13:21.477043  673767 kic.go:430] container "embed-certs-284604" state is running.
	I1124 03:13:21.477399  673767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:13:21.494001  673767 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/config.json ...
	I1124 03:13:21.494192  673767 machine.go:94] provisionDockerMachine start ...
	I1124 03:13:21.494272  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:21.511591  673767 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:21.511856  673767 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I1124 03:13:21.511873  673767 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:13:21.512614  673767 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49324->127.0.0.1:33503: read: connection reset by peer
	I1124 03:13:24.649100  673767 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:13:24.649139  673767 ubuntu.go:182] provisioning hostname "embed-certs-284604"
	I1124 03:13:24.649210  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:24.667549  673767 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:24.667764  673767 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I1124 03:13:24.667777  673767 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-284604 && echo "embed-certs-284604" | sudo tee /etc/hostname
	I1124 03:13:24.809983  673767 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-284604
	
	I1124 03:13:24.810072  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:24.827292  673767 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:24.827553  673767 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I1124 03:13:24.827580  673767 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-284604' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-284604/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-284604' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:13:24.961631  673767 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:13:24.961659  673767 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-345525/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-345525/.minikube}
	I1124 03:13:24.961707  673767 ubuntu.go:190] setting up certificates
	I1124 03:13:24.961729  673767 provision.go:84] configureAuth start
	I1124 03:13:24.961781  673767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:13:24.978993  673767 provision.go:143] copyHostCerts
	I1124 03:13:24.979052  673767 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem, removing ...
	I1124 03:13:24.979069  673767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem
	I1124 03:13:24.979133  673767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/ca.pem (1082 bytes)
	I1124 03:13:24.979228  673767 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem, removing ...
	I1124 03:13:24.979243  673767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem
	I1124 03:13:24.979270  673767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/cert.pem (1123 bytes)
	I1124 03:13:24.979338  673767 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem, removing ...
	I1124 03:13:24.979346  673767 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem
	I1124 03:13:24.979370  673767 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-345525/.minikube/key.pem (1679 bytes)
	I1124 03:13:24.979434  673767 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem org=jenkins.embed-certs-284604 san=[127.0.0.1 192.168.94.2 embed-certs-284604 localhost minikube]
	I1124 03:13:25.132758  673767 provision.go:177] copyRemoteCerts
	I1124 03:13:25.132818  673767 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:13:25.132866  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:25.149705  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:25.247440  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 03:13:25.264014  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:13:25.279874  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1124 03:13:25.295812  673767 provision.go:87] duration metric: took 334.070361ms to configureAuth
	I1124 03:13:25.295837  673767 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:13:25.296027  673767 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:13:25.296134  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:25.312742  673767 main.go:143] libmachine: Using SSH client type: native
	I1124 03:13:25.312959  673767 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33503 <nil> <nil>}
	I1124 03:13:25.312983  673767 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 03:13:25.617219  673767 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 03:13:25.617246  673767 machine.go:97] duration metric: took 4.123039203s to provisionDockerMachine
	I1124 03:13:25.617260  673767 start.go:293] postStartSetup for "embed-certs-284604" (driver="docker")
	I1124 03:13:25.617275  673767 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:13:25.617343  673767 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:13:25.617386  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:25.637174  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:25.734461  673767 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:13:25.737722  673767 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:13:25.737743  673767 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:13:25.737752  673767 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/addons for local assets ...
	I1124 03:13:25.737801  673767 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-345525/.minikube/files for local assets ...
	I1124 03:13:25.737878  673767 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem -> 3490782.pem in /etc/ssl/certs
	I1124 03:13:25.737994  673767 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:13:25.745077  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:13:25.761050  673767 start.go:296] duration metric: took 143.775494ms for postStartSetup
	I1124 03:13:25.761108  673767 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:13:25.761139  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:25.778294  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:25.872298  673767 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:13:25.876609  673767 fix.go:56] duration metric: took 4.680421261s for fixHost
	I1124 03:13:25.876634  673767 start.go:83] releasing machines lock for "embed-certs-284604", held for 4.680464995s
	I1124 03:13:25.876703  673767 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-284604
	I1124 03:13:25.893223  673767 ssh_runner.go:195] Run: cat /version.json
	I1124 03:13:25.893295  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:25.893354  673767 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:13:25.893434  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:25.910903  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:25.911287  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:26.005512  673767 ssh_runner.go:195] Run: systemctl --version
	I1124 03:13:26.078619  673767 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 03:13:26.111985  673767 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:13:26.116370  673767 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:13:26.116427  673767 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:13:26.123998  673767 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 03:13:26.124020  673767 start.go:496] detecting cgroup driver to use...
	I1124 03:13:26.124049  673767 detect.go:190] detected "systemd" cgroup driver on host os
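detect.go reports a "systemd" cgroup driver for the host OS here; two quick hand checks that usually agree with that detection (illustrative commands, not necessarily what detect.go itself runs):

    ps -p 1 -o comm=              # "systemd" when systemd is PID 1
    stat -fc %T /sys/fs/cgroup    # "cgroup2fs" on a cgroup v2 host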
	I1124 03:13:26.124100  673767 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 03:13:26.137366  673767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 03:13:26.148272  673767 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:13:26.148328  673767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:13:26.161121  673767 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:13:26.172030  673767 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:13:26.247968  673767 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:13:26.325591  673767 docker.go:234] disabling docker service ...
	I1124 03:13:26.325642  673767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:13:26.338520  673767 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:13:26.349463  673767 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:13:26.426193  673767 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:13:26.501999  673767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:13:26.512931  673767 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:13:26.525655  673767 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 03:13:26.525703  673767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.533624  673767 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 03:13:26.533666  673767 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.541730  673767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.549536  673767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.557411  673767 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:13:26.564562  673767 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.572348  673767 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 03:13:26.579652  673767 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
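Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with a fragment along these lines (reconstructed from the commands, not captured from the node):

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]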
	I1124 03:13:26.587376  673767 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:13:26.593913  673767 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:13:26.600541  673767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:26.674417  673767 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 03:13:26.805721  673767 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 03:13:26.805778  673767 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 03:13:26.809498  673767 start.go:564] Will wait 60s for crictl version
	I1124 03:13:26.809545  673767 ssh_runner.go:195] Run: which crictl
	I1124 03:13:26.812755  673767 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:13:26.835988  673767 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 03:13:26.836067  673767 ssh_runner.go:195] Run: crio --version
	I1124 03:13:26.863207  673767 ssh_runner.go:195] Run: crio --version
	I1124 03:13:26.890909  673767 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 03:13:26.892032  673767 cli_runner.go:164] Run: docker network inspect embed-certs-284604 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
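The heavily escaped template above collects name, driver, subnet, gateway, MTU and container IPs in one pass; a simpler hand-run equivalent for just the subnet and gateway would be:

    docker network inspect embed-certs-284604 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
    # e.g. "192.168.94.0/24 192.168.94.1", consistent with the host.minikube.internal entry below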
	I1124 03:13:26.909903  673767 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 03:13:26.913595  673767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:13:26.923532  673767 kubeadm.go:884] updating cluster {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:13:26.923632  673767 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 03:13:26.923674  673767 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:13:26.953820  673767 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:13:26.953838  673767 crio.go:433] Images already preloaded, skipping extraction
	I1124 03:13:26.953879  673767 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:13:26.976967  673767 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 03:13:26.976985  673767 cache_images.go:86] Images are preloaded, skipping loading
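Both crictl sweeps above confirm every image required for v1.34.1 is already in CRI-O's store, so the preload tarball extraction is skipped; the same list can be read by hand (a sketch, assuming jq is available on the node):

    sudo crictl images --output json | jq -r '.images[].repoTags[]'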
	I1124 03:13:26.976993  673767 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 03:13:26.977087  673767 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-284604 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
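This unit fragment is written below as the 368-byte drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf; the empty ExecStart= line clears the packaged command before the override. On the node the merged unit can be inspected with:

    systemctl cat kubelet    # shows kubelet.service plus the 10-kubeadm.conf drop-in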
	I1124 03:13:26.977140  673767 ssh_runner.go:195] Run: crio config
	I1124 03:13:27.020706  673767 cni.go:84] Creating CNI manager for ""
	I1124 03:13:27.020724  673767 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 03:13:27.020740  673767 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:13:27.020760  673767 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-284604 NodeName:embed-certs-284604 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:13:27.020880  673767 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-284604"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:13:27.020953  673767 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:13:27.028655  673767 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:13:27.028709  673767 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:13:27.035781  673767 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 03:13:27.047656  673767 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:13:27.058850  673767 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
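The 2214-byte file just staged is the kubeadm config rendered above; recent kubeadm releases can sanity-check such a file before it is used (an illustrative command, assuming the `config validate` subcommand is present in this binary version):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new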
	I1124 03:13:27.070310  673767 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:13:27.073499  673767 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:13:27.082423  673767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:27.160803  673767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:13:27.184692  673767 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604 for IP: 192.168.94.2
	I1124 03:13:27.184711  673767 certs.go:195] generating shared ca certs ...
	I1124 03:13:27.184732  673767 certs.go:227] acquiring lock for ca certs: {Name:mk4aeb304544b28f16a8eaec9ba420aa35e90cd9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:27.184905  673767 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key
	I1124 03:13:27.184986  673767 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key
	I1124 03:13:27.185004  673767 certs.go:257] generating profile certs ...
	I1124 03:13:27.185145  673767 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/client.key
	I1124 03:13:27.185238  673767 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key.9041d087
	I1124 03:13:27.185290  673767 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key
	I1124 03:13:27.185387  673767 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem (1338 bytes)
	W1124 03:13:27.185417  673767 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078_empty.pem, impossibly tiny 0 bytes
	I1124 03:13:27.185430  673767 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:13:27.185456  673767 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/ca.pem (1082 bytes)
	I1124 03:13:27.185481  673767 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:13:27.185502  673767 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/certs/key.pem (1679 bytes)
	I1124 03:13:27.185543  673767 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem (1708 bytes)
	I1124 03:13:27.186095  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:13:27.203654  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:13:27.220660  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:13:27.238775  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:13:27.263708  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:13:27.280354  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:13:27.295972  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:13:27.311550  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/embed-certs-284604/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:13:27.327064  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/certs/349078.pem --> /usr/share/ca-certificates/349078.pem (1338 bytes)
	I1124 03:13:27.342758  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/ssl/certs/3490782.pem --> /usr/share/ca-certificates/3490782.pem (1708 bytes)
	I1124 03:13:27.358487  673767 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:13:27.375141  673767 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:13:27.386596  673767 ssh_runner.go:195] Run: openssl version
	I1124 03:13:27.392118  673767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/349078.pem && ln -fs /usr/share/ca-certificates/349078.pem /etc/ssl/certs/349078.pem"
	I1124 03:13:27.399836  673767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/349078.pem
	I1124 03:13:27.403114  673767 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:30 /usr/share/ca-certificates/349078.pem
	I1124 03:13:27.403159  673767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/349078.pem
	I1124 03:13:27.436458  673767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/349078.pem /etc/ssl/certs/51391683.0"
	I1124 03:13:27.443327  673767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3490782.pem && ln -fs /usr/share/ca-certificates/3490782.pem /etc/ssl/certs/3490782.pem"
	I1124 03:13:27.451223  673767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3490782.pem
	I1124 03:13:27.454519  673767 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:30 /usr/share/ca-certificates/3490782.pem
	I1124 03:13:27.454562  673767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3490782.pem
	I1124 03:13:27.487685  673767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3490782.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:13:27.494693  673767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:13:27.502120  673767 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:27.505417  673767 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:24 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:27.505459  673767 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:13:27.538267  673767 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
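All three certificate-install blocks above follow the same OpenSSL lookup convention: a CA is trusted once a symlink named <subject-hash>.0 points at it under /etc/ssl/certs. Reproduced by hand for the last one:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941 -- hence the /etc/ssl/certs/b5213941.0 symlink created above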
	I1124 03:13:27.545261  673767 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:13:27.548661  673767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 03:13:27.582051  673767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 03:13:27.614279  673767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 03:13:27.646902  673767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 03:13:27.682043  673767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 03:13:27.726407  673767 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
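Each -checkend 86400 call exits non-zero if the certificate expires within the next 24 hours; the six checks above amount to a loop like this (a sketch over the same cert paths):

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      sudo openssl x509 -noout -in /var/lib/minikube/certs/$c.crt -checkend 86400 \
        || echo "$c expires within 24h"
    done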
	I1124 03:13:27.779794  673767 kubeadm.go:401] StartCluster: {Name:embed-certs-284604 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-284604 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:13:27.779907  673767 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 03:13:27.779990  673767 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:13:27.816976  673767 cri.go:89] found id: "bee45aa12c3da24d490c817cac60d2855a72aa70d2a66c610bbc0b141b008dbf"
	I1124 03:13:27.817008  673767 cri.go:89] found id: "bbf02917610133c48abd17535a3d2ae4b7bf5f001204872f0f6c240d1a35d582"
	I1124 03:13:27.817014  673767 cri.go:89] found id: "d2ca966aa30cf5ae6493816c664588714b29eded4d0b36ff92e650b04101b9da"
	I1124 03:13:27.817028  673767 cri.go:89] found id: "76f77f06071348df9904042948e1b1b6506e913800d6148de693cf689f54ff8b"
	I1124 03:13:27.817036  673767 cri.go:89] found id: ""
	I1124 03:13:27.817087  673767 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 03:13:27.831032  673767 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T03:13:27Z" level=error msg="open /run/runc: no such file or directory"
	I1124 03:13:27.831100  673767 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:13:27.839437  673767 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 03:13:27.839455  673767 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 03:13:27.839507  673767 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 03:13:27.846602  673767 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 03:13:27.847027  673767 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-284604" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:13:27.847136  673767 kubeconfig.go:62] /home/jenkins/minikube-integration/21975-345525/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-284604" cluster setting kubeconfig missing "embed-certs-284604" context setting]
	I1124 03:13:27.847407  673767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:27.848629  673767 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 03:13:27.856238  673767 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 03:13:27.856279  673767 kubeadm.go:602] duration metric: took 16.81619ms to restartPrimaryControlPlane
	I1124 03:13:27.856289  673767 kubeadm.go:403] duration metric: took 76.504087ms to StartCluster
	I1124 03:13:27.856305  673767 settings.go:142] acquiring lock: {Name:mkdfe6d35d759513d6a2d0c337461c210b0a4272 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:27.856405  673767 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:13:27.857521  673767 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-345525/kubeconfig: {Name:mk1e847f86b234027ecb3187039983f94a9c65e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:13:27.857762  673767 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 03:13:27.857806  673767 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:13:27.857876  673767 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-284604"
	I1124 03:13:27.857900  673767 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-284604"
	W1124 03:13:27.857910  673767 addons.go:248] addon storage-provisioner should already be in state true
	I1124 03:13:27.857940  673767 addons.go:70] Setting dashboard=true in profile "embed-certs-284604"
	I1124 03:13:27.857952  673767 addons.go:70] Setting default-storageclass=true in profile "embed-certs-284604"
	I1124 03:13:27.857958  673767 addons.go:239] Setting addon dashboard=true in "embed-certs-284604"
	W1124 03:13:27.857966  673767 addons.go:248] addon dashboard should already be in state true
	I1124 03:13:27.857971  673767 config.go:182] Loaded profile config "embed-certs-284604": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:13:27.857980  673767 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-284604"
	I1124 03:13:27.857992  673767 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:13:27.857945  673767 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:13:27.858227  673767 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:13:27.858476  673767 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:13:27.858548  673767 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:13:27.860704  673767 out.go:179] * Verifying Kubernetes components...
	I1124 03:13:27.861812  673767 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:13:27.883991  673767 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 03:13:27.884065  673767 addons.go:239] Setting addon default-storageclass=true in "embed-certs-284604"
	W1124 03:13:27.884086  673767 addons.go:248] addon default-storageclass should already be in state true
	I1124 03:13:27.884117  673767 host.go:66] Checking if "embed-certs-284604" exists ...
	I1124 03:13:27.884000  673767 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:13:27.884620  673767 cli_runner.go:164] Run: docker container inspect embed-certs-284604 --format={{.State.Status}}
	I1124 03:13:27.885405  673767 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:13:27.885426  673767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:13:27.885480  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:27.886354  673767 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 03:13:27.887255  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 03:13:27.887268  673767 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 03:13:27.887323  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:27.921423  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:27.921731  673767 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:13:27.921752  673767 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:13:27.921807  673767 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-284604
	I1124 03:13:27.923788  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:27.947653  673767 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33503 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/embed-certs-284604/id_rsa Username:docker}
	I1124 03:13:27.999321  673767 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:13:28.011628  673767 node_ready.go:35] waiting up to 6m0s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:13:28.036117  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 03:13:28.036141  673767 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 03:13:28.039267  673767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:13:28.050221  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 03:13:28.050241  673767 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 03:13:28.056552  673767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:13:28.066524  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 03:13:28.066541  673767 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 03:13:28.081521  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 03:13:28.081544  673767 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 03:13:28.097923  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 03:13:28.097948  673767 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 03:13:28.113356  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 03:13:28.113379  673767 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 03:13:28.126139  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 03:13:28.126165  673767 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 03:13:28.137953  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 03:13:28.137972  673767 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 03:13:28.149743  673767 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:13:28.149764  673767 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 03:13:28.161343  673767 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 03:13:29.693106  673767 node_ready.go:49] node "embed-certs-284604" is "Ready"
	I1124 03:13:29.693153  673767 node_ready.go:38] duration metric: took 1.681471744s for node "embed-certs-284604" to be "Ready" ...
	I1124 03:13:29.693171  673767 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:13:29.693236  673767 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:13:30.196725  673767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.157422343s)
	I1124 03:13:30.196810  673767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.140213299s)
	I1124 03:13:30.196921  673767 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.035522661s)
	I1124 03:13:30.197051  673767 api_server.go:72] duration metric: took 2.339249161s to wait for apiserver process to appear ...
	I1124 03:13:30.197076  673767 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:13:30.197097  673767 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:13:30.198396  673767 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-284604 addons enable metrics-server
	
	I1124 03:13:30.202930  673767 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:13:30.202960  673767 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:13:30.207806  673767 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 03:13:30.208724  673767 addons.go:530] duration metric: took 2.350924584s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 03:13:30.698054  673767 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:13:30.702639  673767 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 03:13:30.702668  673767 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 03:13:31.197207  673767 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 03:13:31.201218  673767 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
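The [+]/[-] listings above are the apiserver's verbose health report: the 500s persist only until the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, after which the plain "ok" above is returned. The same endpoint can be queried by hand (illustrative; /healthz is readable by unauthenticated clients under the default system:public-info-viewer binding):

    curl -sk https://192.168.94.2:8443/healthz?verbose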
	I1124 03:13:31.202400  673767 api_server.go:141] control plane version: v1.34.1
	I1124 03:13:31.202424  673767 api_server.go:131] duration metric: took 1.005341455s to wait for apiserver health ...
	I1124 03:13:31.202435  673767 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:13:31.206017  673767 system_pods.go:59] 8 kube-system pods found
	I1124 03:13:31.206062  673767 system_pods.go:61] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:13:31.206074  673767 system_pods.go:61] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:13:31.206090  673767 system_pods.go:61] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:13:31.206103  673767 system_pods.go:61] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:13:31.206114  673767 system_pods.go:61] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:13:31.206120  673767 system_pods.go:61] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:13:31.206128  673767 system_pods.go:61] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:13:31.206140  673767 system_pods.go:61] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:13:31.206152  673767 system_pods.go:74] duration metric: took 3.71031ms to wait for pod list to return data ...
	I1124 03:13:31.206161  673767 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:13:31.208556  673767 default_sa.go:45] found service account: "default"
	I1124 03:13:31.208573  673767 default_sa.go:55] duration metric: took 2.405825ms for default service account to be created ...
	I1124 03:13:31.208580  673767 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:13:31.211125  673767 system_pods.go:86] 8 kube-system pods found
	I1124 03:13:31.211153  673767 system_pods.go:89] "coredns-66bc5c9577-89mzc" [7dff9f08-8110-4d3a-8505-4e3551179ae8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:13:31.211165  673767 system_pods.go:89] "etcd-embed-certs-284604" [9bb7ee29-d6cc-4b59-b921-cda0819c7c5c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 03:13:31.211177  673767 system_pods.go:89] "kindnet-7tbg8" [903047e3-558b-41ce-a93d-9ed12844b7d3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 03:13:31.211189  673767 system_pods.go:89] "kube-apiserver-embed-certs-284604" [bf101ab3-d62c-41ea-9924-2fbddfdf1336] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 03:13:31.211198  673767 system_pods.go:89] "kube-controller-manager-embed-certs-284604" [b6743273-f4b4-4354-9757-74598d2473ac] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 03:13:31.211206  673767 system_pods.go:89] "kube-proxy-bn8fd" [163b51f7-e8f5-47e0-9ea1-ca6d037db165] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 03:13:31.211214  673767 system_pods.go:89] "kube-scheduler-embed-certs-284604" [4c00e11f-f3e6-488f-85a8-29270334c56f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 03:13:31.211222  673767 system_pods.go:89] "storage-provisioner" [b51f7fd3-f53d-4099-9711-9fe1985b9868] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:13:31.211230  673767 system_pods.go:126] duration metric: took 2.64538ms to wait for k8s-apps to be running ...
	I1124 03:13:31.211241  673767 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:13:31.211291  673767 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:13:31.223769  673767 system_svc.go:56] duration metric: took 12.524362ms WaitForService to wait for kubelet
	I1124 03:13:31.223792  673767 kubeadm.go:587] duration metric: took 3.365995248s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:13:31.223809  673767 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:13:31.226069  673767 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 03:13:31.226088  673767 node_conditions.go:123] node cpu capacity is 8
	I1124 03:13:31.226103  673767 node_conditions.go:105] duration metric: took 2.288869ms to run NodePressure ...
	I1124 03:13:31.226113  673767 start.go:242] waiting for startup goroutines ...
	I1124 03:13:31.226120  673767 start.go:247] waiting for cluster config update ...
	I1124 03:13:31.226131  673767 start.go:256] writing updated cluster config ...
	I1124 03:13:31.226352  673767 ssh_runner.go:195] Run: rm -f paused
	I1124 03:13:31.229713  673767 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:13:31.232418  673767 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	W1124 03:13:33.236490  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:35.238350  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:37.238821  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:39.737222  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:42.237496  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:44.738098  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:47.236763  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:49.237506  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:51.737027  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:53.737721  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:55.738650  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:13:58.237261  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:14:00.736950  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:14:02.737274  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	W1124 03:14:05.240010  673767 pod_ready.go:104] pod "coredns-66bc5c9577-89mzc" is not "Ready", error: <nil>
	I1124 03:14:07.236919  673767 pod_ready.go:94] pod "coredns-66bc5c9577-89mzc" is "Ready"
	I1124 03:14:07.236949  673767 pod_ready.go:86] duration metric: took 36.004512715s for pod "coredns-66bc5c9577-89mzc" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.239000  673767 pod_ready.go:83] waiting for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.242401  673767 pod_ready.go:94] pod "etcd-embed-certs-284604" is "Ready"
	I1124 03:14:07.242434  673767 pod_ready.go:86] duration metric: took 3.412424ms for pod "etcd-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.244333  673767 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.247598  673767 pod_ready.go:94] pod "kube-apiserver-embed-certs-284604" is "Ready"
	I1124 03:14:07.247620  673767 pod_ready.go:86] duration metric: took 3.267496ms for pod "kube-apiserver-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.249363  673767 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.435324  673767 pod_ready.go:94] pod "kube-controller-manager-embed-certs-284604" is "Ready"
	I1124 03:14:07.435349  673767 pod_ready.go:86] duration metric: took 185.964379ms for pod "kube-controller-manager-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:07.635979  673767 pod_ready.go:83] waiting for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:08.035382  673767 pod_ready.go:94] pod "kube-proxy-bn8fd" is "Ready"
	I1124 03:14:08.035410  673767 pod_ready.go:86] duration metric: took 399.407799ms for pod "kube-proxy-bn8fd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:08.236064  673767 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:08.635477  673767 pod_ready.go:94] pod "kube-scheduler-embed-certs-284604" is "Ready"
	I1124 03:14:08.635505  673767 pod_ready.go:86] duration metric: took 399.417659ms for pod "kube-scheduler-embed-certs-284604" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:14:08.635517  673767 pod_ready.go:40] duration metric: took 37.405775482s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:14:08.676863  673767 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 03:14:08.678754  673767 out.go:179] * Done! kubectl is now configured to use "embed-certs-284604" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 03:13:41 embed-certs-284604 crio[579]: time="2025-11-24T03:13:41.04718773Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 03:13:41 embed-certs-284604 crio[579]: time="2025-11-24T03:13:41.320626947Z" level=info msg="Removing container: 51dc964de92e7ee4291fb6ce4b39ea18a39fed7ab04c8928c2a3a956bad71e32" id=7f94d0df-e69d-4284-9cf3-b11714424ff1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:13:41 embed-certs-284604 crio[579]: time="2025-11-24T03:13:41.329003555Z" level=info msg="Removed container 51dc964de92e7ee4291fb6ce4b39ea18a39fed7ab04c8928c2a3a956bad71e32: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87/dashboard-metrics-scraper" id=7f94d0df-e69d-4284-9cf3-b11714424ff1 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.267869138Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8e010990-f57f-4c93-8e35-a409f67a9dce name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.268712511Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f713c98d-f2ac-41f4-a17d-662ff60cd72c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.269576224Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87/dashboard-metrics-scraper" id=a7209381-18d1-4039-aaac-7f1a5dbde526 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.269708129Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.275593601Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.276041298Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.306406953Z" level=info msg="Created container 8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87/dashboard-metrics-scraper" id=a7209381-18d1-4039-aaac-7f1a5dbde526 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.306810304Z" level=info msg="Starting container: 8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0" id=a3b816a0-417b-4f01-9e3c-02c0e7efa672 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.308513325Z" level=info msg="Started container" PID=1786 containerID=8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87/dashboard-metrics-scraper id=a3b816a0-417b-4f01-9e3c-02c0e7efa672 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c130d39491b124112a8af59d98589aeb6346eecbb7a22ed51df8ebda50a393d8
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.356370014Z" level=info msg="Removing container: cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100" id=13b6ccde-76ea-4f16-96be-3feab02626bf name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:13:57 embed-certs-284604 crio[579]: time="2025-11-24T03:13:57.364424158Z" level=info msg="Removed container cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87/dashboard-metrics-scraper" id=13b6ccde-76ea-4f16-96be-3feab02626bf name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.367330859Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=bb277b43-3ec4-48f0-acc7-5092d2188d5c name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.368312909Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=8cbfa3c5-db34-40ba-a97e-f1c0093eddf3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.369298456Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=97bfb23f-1213-4dc5-8c8e-40964e7f0c88 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.369432191Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.374128556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.374319186Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/09dc8e94e4b287ac9d0e2d9e7aaa3e4da235c835687a3e6646d22065c58d0bed/merged/etc/passwd: no such file or directory"
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.374344697Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/09dc8e94e4b287ac9d0e2d9e7aaa3e4da235c835687a3e6646d22065c58d0bed/merged/etc/group: no such file or directory"
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.37462913Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.405059636Z" level=info msg="Created container 7dfe56080d905fccce6df6410252e5de80a8471a616d9694d5005304fc5a8607: kube-system/storage-provisioner/storage-provisioner" id=97bfb23f-1213-4dc5-8c8e-40964e7f0c88 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.405616545Z" level=info msg="Starting container: 7dfe56080d905fccce6df6410252e5de80a8471a616d9694d5005304fc5a8607" id=bff1b36d-89b7-42f1-bd0d-4b41086fcc07 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 03:14:01 embed-certs-284604 crio[579]: time="2025-11-24T03:14:01.407665786Z" level=info msg="Started container" PID=1800 containerID=7dfe56080d905fccce6df6410252e5de80a8471a616d9694d5005304fc5a8607 description=kube-system/storage-provisioner/storage-provisioner id=bff1b36d-89b7-42f1-bd0d-4b41086fcc07 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ccfb1d626ba5f0ed6b630c3b86e9df642891dd98afe66369e741858c8793b1e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	7dfe56080d905       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   4ccfb1d626ba5       storage-provisioner                          kube-system
	8f8caf7133997       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           27 seconds ago      Exited              dashboard-metrics-scraper   2                   c130d39491b12       dashboard-metrics-scraper-6ffb444bf9-ftn87   kubernetes-dashboard
	c01f29a7b7ec8       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   907cd7bb38a4d       kubernetes-dashboard-855c9754f9-fbjrx        kubernetes-dashboard
	b8e427ddf2a88       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   6d4feca0b6558       busybox                                      default
	7c9de32e2a2a1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   183e25f5dcd97       coredns-66bc5c9577-89mzc                     kube-system
	d1fd18ad940c9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   4ccfb1d626ba5       storage-provisioner                          kube-system
	7f8f00c980f03       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   5d30f27c49f2c       kindnet-7tbg8                                kube-system
	e2e368c8131a6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   301a118fce7ea       kube-proxy-bn8fd                             kube-system
	bee45aa12c3da       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   527b65254b7f2       kube-apiserver-embed-certs-284604            kube-system
	bbf0291761013       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   b300b1fe3cf9b       kube-controller-manager-embed-certs-284604   kube-system
	d2ca966aa30cf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   2a9b4be670473       kube-scheduler-embed-certs-284604            kube-system
	76f77f0607134       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   aa11e4fa7f25c       etcd-embed-certs-284604                      kube-system
	
	
	==> coredns [7c9de32e2a2a1a4bf1dcb74cc732ea9bf8940a38a5255b6219921850e280f953] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58231 - 4256 "HINFO IN 2377575116500461309.1616807941125280626. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.107311116s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-284604
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-284604
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=embed-certs-284604
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_12_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:12:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-284604
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:14:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:14:00 +0000   Mon, 24 Nov 2025 03:12:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:14:00 +0000   Mon, 24 Nov 2025 03:12:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:14:00 +0000   Mon, 24 Nov 2025 03:12:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:14:00 +0000   Mon, 24 Nov 2025 03:13:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    embed-certs-284604
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863352Ki
	  pods:               110
	System Info:
	  Machine ID:                 a6c8a789d6c7d69e45d665cd69238646
	  System UUID:                069cc4ec-f604-4b4c-a3d4-6c93aa172617
	  Boot ID:                    c838d98c-f1fa-41c1-a2b8-59b7c45047a0
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-89mzc                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-embed-certs-284604                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-7tbg8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-embed-certs-284604             250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-embed-certs-284604    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-bn8fd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-embed-certs-284604             100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-ftn87    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-fbjrx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 104s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  Starting                 111s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  110s               kubelet          Node embed-certs-284604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    110s               kubelet          Node embed-certs-284604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s               kubelet          Node embed-certs-284604 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           106s               node-controller  Node embed-certs-284604 event: Registered Node embed-certs-284604 in Controller
	  Normal  NodeReady                94s                kubelet          Node embed-certs-284604 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-284604 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-284604 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node embed-certs-284604 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node embed-certs-284604 event: Registered Node embed-certs-284604 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[ +13.372172] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f6 09 f3 ac 26 8b 08 06
	[  +0.000357] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 6a 16 36 31 53 24 08 06
	[  +9.087684] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[Nov24 03:10] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 4a 39 0e cc 7c 51 08 06
	[  +0.002342] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[  +2.640600] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a d1 9e f8 c4 c9 08 06
	[  +0.000320] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 52 92 28 ce 18 1d 08 06
	[  +6.764260] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	[  +7.557801] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff e6 71 69 e5 69 d7 08 06
	[  +0.000308] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9a 28 0f 10 ae dc 08 06
	[ +11.147525] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff aa e3 1d c7 4d 6a 08 06
	[  +0.000361] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a a7 49 67 d7 b7 08 06
	
	
	==> etcd [76f77f06071348df9904042948e1b1b6506e913800d6148de693cf689f54ff8b] <==
	{"level":"warn","ts":"2025-11-24T03:13:29.105262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.114131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.120359Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.126391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.133834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.140582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.148509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.155205Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.162827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.174975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.184907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.189763Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.196453Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.202032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.207824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.213436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.219240Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.224872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.230689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.236778Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.242640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.256492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.262217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.267909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:13:29.311954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48486","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:14:24 up  1:56,  0 user,  load average: 1.64, 3.38, 2.56
	Linux embed-certs-284604 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7f8f00c980f03b3a444b2377b90496745dbb07c9bd8f9baeb585c8435ae1c9dc] <==
	I1124 03:13:30.824297       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:13:30.824584       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 03:13:30.824759       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:13:30.824777       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:13:30.824803       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:13:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:13:31.024751       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:13:31.024846       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:13:31.024862       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:13:31.120508       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:13:31.420256       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:13:31.420277       1 metrics.go:72] Registering metrics
	I1124 03:13:31.420335       1 controller.go:711] "Syncing nftables rules"
	I1124 03:13:41.025584       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:13:41.025658       1 main.go:301] handling current node
	I1124 03:13:51.025356       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:13:51.025409       1 main.go:301] handling current node
	I1124 03:14:01.025373       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:14:01.025412       1 main.go:301] handling current node
	I1124 03:14:11.028229       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:14:11.028263       1 main.go:301] handling current node
	I1124 03:14:21.034325       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 03:14:21.034376       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bee45aa12c3da24d490c817cac60d2855a72aa70d2a66c610bbc0b141b008dbf] <==
	I1124 03:13:29.773459       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:13:29.773490       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:13:29.773447       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 03:13:29.773580       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 03:13:29.773756       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 03:13:29.774393       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 03:13:29.775507       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 03:13:29.775563       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:13:29.779866       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1124 03:13:29.780795       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 03:13:29.798433       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1124 03:13:29.813953       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 03:13:29.813973       1 policy_source.go:240] refreshing policies
	I1124 03:13:29.898926       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:13:30.024500       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:13:30.048409       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:13:30.062979       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:13:30.070119       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:13:30.075282       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:13:30.104286       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.121.191"}
	I1124 03:13:30.112842       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.49.227"}
	I1124 03:13:30.676788       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:13:33.211525       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:13:33.512795       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:13:33.611455       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [bbf02917610133c48abd17535a3d2ae4b7bf5f001204872f0f6c240d1a35d582] <==
	I1124 03:13:33.062156       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:13:33.065335       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:13:33.065373       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:13:33.079618       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 03:13:33.109516       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:13:33.109535       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:13:33.109581       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:13:33.109614       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 03:13:33.109629       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 03:13:33.109700       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:13:33.109721       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 03:13:33.109734       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 03:13:33.109790       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:13:33.109863       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:13:33.112281       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 03:13:33.113414       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 03:13:33.115659       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:13:33.115744       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 03:13:33.116920       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:13:33.119171       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:13:33.123455       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:13:33.123467       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:13:33.123474       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:13:33.130383       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:13:33.134678       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [e2e368c8131a6bdffbca9bf069eec5b0d46432a4f32e063227f4393352e1c12b] <==
	I1124 03:13:30.665177       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:13:30.724059       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:13:30.824415       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:13:30.824452       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 03:13:30.824594       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:13:30.844662       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:13:30.844710       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:13:30.850332       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:13:30.850773       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:13:30.850812       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:13:30.852333       1 config.go:200] "Starting service config controller"
	I1124 03:13:30.852357       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:13:30.852402       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:13:30.852435       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:13:30.852480       1 config.go:309] "Starting node config controller"
	I1124 03:13:30.852519       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:13:30.852526       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:13:30.852549       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:13:30.852574       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:13:30.952806       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:13:30.952825       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 03:13:30.952853       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [d2ca966aa30cf5ae6493816c664588714b29eded4d0b36ff92e650b04101b9da] <==
	I1124 03:13:28.711756       1 serving.go:386] Generated self-signed cert in-memory
	W1124 03:13:29.710385       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 03:13:29.710424       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [role.rbac.authorization.k8s.io "extension-apiserver-authentication-reader" not found, role.rbac.authorization.k8s.io "system::leader-locking-kube-scheduler" not found]
	W1124 03:13:29.710438       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 03:13:29.710448       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 03:13:29.733869       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:13:29.733941       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:13:29.736522       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:13:29.736565       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:13:29.736968       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:13:29.737089       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:13:29.837091       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:13:33 embed-certs-284604 kubelet[743]: I1124 03:13:33.825513     743 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmrbz\" (UniqueName: \"kubernetes.io/projected/3bee292f-a657-46ec-b8d0-f1389ede44cc-kube-api-access-jmrbz\") pod \"dashboard-metrics-scraper-6ffb444bf9-ftn87\" (UID: \"3bee292f-a657-46ec-b8d0-f1389ede44cc\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87"
	Nov 24 03:13:33 embed-certs-284604 kubelet[743]: I1124 03:13:33.825535     743 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvlht\" (UniqueName: \"kubernetes.io/projected/f370dc97-efc4-4903-a62a-e6af42b5f4f9-kube-api-access-qvlht\") pod \"kubernetes-dashboard-855c9754f9-fbjrx\" (UID: \"f370dc97-efc4-4903-a62a-e6af42b5f4f9\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbjrx"
	Nov 24 03:13:37 embed-certs-284604 kubelet[743]: I1124 03:13:37.167562     743 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 03:13:38 embed-certs-284604 kubelet[743]: I1124 03:13:38.322043     743 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-fbjrx" podStartSLOduration=1.9262234710000001 podStartE2EDuration="5.322022304s" podCreationTimestamp="2025-11-24 03:13:33 +0000 UTC" firstStartedPulling="2025-11-24 03:13:34.060821149 +0000 UTC m=+6.873562205" lastFinishedPulling="2025-11-24 03:13:37.456619968 +0000 UTC m=+10.269361038" observedRunningTime="2025-11-24 03:13:38.321572446 +0000 UTC m=+11.134313531" watchObservedRunningTime="2025-11-24 03:13:38.322022304 +0000 UTC m=+11.134763376"
	Nov 24 03:13:40 embed-certs-284604 kubelet[743]: I1124 03:13:40.315749     743 scope.go:117] "RemoveContainer" containerID="51dc964de92e7ee4291fb6ce4b39ea18a39fed7ab04c8928c2a3a956bad71e32"
	Nov 24 03:13:41 embed-certs-284604 kubelet[743]: I1124 03:13:41.319346     743 scope.go:117] "RemoveContainer" containerID="51dc964de92e7ee4291fb6ce4b39ea18a39fed7ab04c8928c2a3a956bad71e32"
	Nov 24 03:13:41 embed-certs-284604 kubelet[743]: I1124 03:13:41.319513     743 scope.go:117] "RemoveContainer" containerID="cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100"
	Nov 24 03:13:41 embed-certs-284604 kubelet[743]: E1124 03:13:41.319740     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ftn87_kubernetes-dashboard(3bee292f-a657-46ec-b8d0-f1389ede44cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87" podUID="3bee292f-a657-46ec-b8d0-f1389ede44cc"
	Nov 24 03:13:42 embed-certs-284604 kubelet[743]: I1124 03:13:42.322687     743 scope.go:117] "RemoveContainer" containerID="cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100"
	Nov 24 03:13:42 embed-certs-284604 kubelet[743]: E1124 03:13:42.322917     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ftn87_kubernetes-dashboard(3bee292f-a657-46ec-b8d0-f1389ede44cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87" podUID="3bee292f-a657-46ec-b8d0-f1389ede44cc"
	Nov 24 03:13:43 embed-certs-284604 kubelet[743]: I1124 03:13:43.347128     743 scope.go:117] "RemoveContainer" containerID="cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100"
	Nov 24 03:13:43 embed-certs-284604 kubelet[743]: E1124 03:13:43.347272     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ftn87_kubernetes-dashboard(3bee292f-a657-46ec-b8d0-f1389ede44cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87" podUID="3bee292f-a657-46ec-b8d0-f1389ede44cc"
	Nov 24 03:13:57 embed-certs-284604 kubelet[743]: I1124 03:13:57.267486     743 scope.go:117] "RemoveContainer" containerID="cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100"
	Nov 24 03:13:57 embed-certs-284604 kubelet[743]: I1124 03:13:57.355227     743 scope.go:117] "RemoveContainer" containerID="cfd0841d7347980f87be5fbc1be5f112d4e719caa7acef25009d0b04b0c77100"
	Nov 24 03:13:57 embed-certs-284604 kubelet[743]: I1124 03:13:57.355392     743 scope.go:117] "RemoveContainer" containerID="8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0"
	Nov 24 03:13:57 embed-certs-284604 kubelet[743]: E1124 03:13:57.355560     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ftn87_kubernetes-dashboard(3bee292f-a657-46ec-b8d0-f1389ede44cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87" podUID="3bee292f-a657-46ec-b8d0-f1389ede44cc"
	Nov 24 03:14:01 embed-certs-284604 kubelet[743]: I1124 03:14:01.366913     743 scope.go:117] "RemoveContainer" containerID="d1fd18ad940c962cf45cd1bcc24444e576f59c99eaf790532d0fef509627de0c"
	Nov 24 03:14:03 embed-certs-284604 kubelet[743]: I1124 03:14:03.347845     743 scope.go:117] "RemoveContainer" containerID="8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0"
	Nov 24 03:14:03 embed-certs-284604 kubelet[743]: E1124 03:14:03.348088     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ftn87_kubernetes-dashboard(3bee292f-a657-46ec-b8d0-f1389ede44cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87" podUID="3bee292f-a657-46ec-b8d0-f1389ede44cc"
	Nov 24 03:14:14 embed-certs-284604 kubelet[743]: I1124 03:14:14.267905     743 scope.go:117] "RemoveContainer" containerID="8f8caf71339973e50ecb5df251da16c7ce4c1e7c78da8616f7233b8a55df0ef0"
	Nov 24 03:14:14 embed-certs-284604 kubelet[743]: E1124 03:14:14.268081     743 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-ftn87_kubernetes-dashboard(3bee292f-a657-46ec-b8d0-f1389ede44cc)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-ftn87" podUID="3bee292f-a657-46ec-b8d0-f1389ede44cc"
	Nov 24 03:14:20 embed-certs-284604 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 03:14:20 embed-certs-284604 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 03:14:20 embed-certs-284604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 03:14:20 embed-certs-284604 systemd[1]: kubelet.service: Consumed 1.427s CPU time.
	
	
	==> kubernetes-dashboard [c01f29a7b7ec8871b22514ceac1950d3ab216fa11d1e3c795917584a750a2e70] <==
	2025/11/24 03:13:37 Starting overwatch
	2025/11/24 03:13:37 Using namespace: kubernetes-dashboard
	2025/11/24 03:13:37 Using in-cluster config to connect to apiserver
	2025/11/24 03:13:37 Using secret token for csrf signing
	2025/11/24 03:13:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 03:13:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 03:13:37 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 03:13:37 Generating JWE encryption key
	2025/11/24 03:13:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 03:13:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 03:13:37 Initializing JWE encryption key from synchronized object
	2025/11/24 03:13:37 Creating in-cluster Sidecar client
	2025/11/24 03:13:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 03:13:37 Serving insecurely on HTTP port: 9090
	2025/11/24 03:14:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [7dfe56080d905fccce6df6410252e5de80a8471a616d9694d5005304fc5a8607] <==
	I1124 03:14:01.419597       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:14:01.426576       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:14:01.426610       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:14:01.428141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:04.883091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:09.143470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:12.741844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:15.795699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:18.817404       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:18.821331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:14:18.821459       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:14:18.821604       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"49bb5acd-171e-4aa8-8356-6bac5deb0205", APIVersion:"v1", ResourceVersion:"633", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-284604_3125ec58-ae72-4152-85f4-98843fc71951 became leader
	I1124 03:14:18.821640       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-284604_3125ec58-ae72-4152-85f4-98843fc71951!
	W1124 03:14:18.823268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:18.826518       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:14:18.921853       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-284604_3125ec58-ae72-4152-85f4-98843fc71951!
	W1124 03:14:20.829439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:20.832825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:22.836753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:22.840536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:24.844308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:14:24.848961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [d1fd18ad940c962cf45cd1bcc24444e576f59c99eaf790532d0fef509627de0c] <==
	I1124 03:13:30.639428       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 03:14:00.641455       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-284604 -n embed-certs-284604
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-284604 -n embed-certs-284604: exit status 2 (314.754106ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-284604 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.12s)
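Note: the fatal line in the second storage-provisioner instance (dial tcp 10.96.0.1:443: i/o timeout) says the pod could not reach the in-cluster apiserver VIP around the pause. A minimal triage sketch for a local repro, reusing the profile name from this run and assuming curl is present in the node image:

    # overall component health; --format takes a Go template, as used above
    minikube status -p embed-certs-284604 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'
    # probe the service VIP the provisioner timed out against, from inside the node
    minikube ssh -p embed-certs-284604 -- curl -sk --max-time 5 https://10.96.0.1:443/version
    # re-read the provisioner logs after unpausing
    kubectl --context embed-certs-284604 -n kube-system logs storage-provisioner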

Test pass (264/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.94
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.13
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.38
21 TestBinaryMirror 0.8
22 TestOffline 49.45
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 124.21
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 8.39
48 TestAddons/StoppedEnableDisable 16.62
49 TestCertOptions 33.93
50 TestCertExpiration 210.73
52 TestForceSystemdFlag 25.47
53 TestForceSystemdEnv 29.08
58 TestErrorSpam/setup 24.03
59 TestErrorSpam/start 0.65
60 TestErrorSpam/status 0.93
61 TestErrorSpam/pause 5.47
62 TestErrorSpam/unpause 5.42
63 TestErrorSpam/stop 12.57
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 35.61
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.08
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.57
75 TestFunctional/serial/CacheCmd/cache/add_local 1.16
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 41.88
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.13
86 TestFunctional/serial/LogsFileCmd 1.15
87 TestFunctional/serial/InvalidService 4.25
89 TestFunctional/parallel/ConfigCmd 0.44
90 TestFunctional/parallel/DashboardCmd 7.21
91 TestFunctional/parallel/DryRun 0.4
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 1
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 22.49
101 TestFunctional/parallel/SSHCmd 0.6
102 TestFunctional/parallel/CpCmd 2.04
103 TestFunctional/parallel/MySQL 18.05
104 TestFunctional/parallel/FileSync 0.31
105 TestFunctional/parallel/CertSync 1.92
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.61
113 TestFunctional/parallel/License 0.46
114 TestFunctional/parallel/Version/short 0.1
115 TestFunctional/parallel/Version/components 0.68
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
120 TestFunctional/parallel/ImageCommands/ImageBuild 2.24
121 TestFunctional/parallel/ImageCommands/Setup 1.23
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
127 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.51
130 TestFunctional/parallel/ProfileCmd/profile_list 0.55
131 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
133 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.28
134 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
137 TestFunctional/parallel/ImageCommands/ImageRemove 1.26
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/MountCmd/any-port 5.81
148 TestFunctional/parallel/MountCmd/specific-port 1.6
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.72
150 TestFunctional/parallel/ServiceCmd/List 1.69
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 163.89
163 TestMultiControlPlane/serial/DeployApp 5.18
164 TestMultiControlPlane/serial/PingHostFromPods 1.01
165 TestMultiControlPlane/serial/AddWorkerNode 26.28
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
168 TestMultiControlPlane/serial/CopyFile 16.94
169 TestMultiControlPlane/serial/StopSecondaryNode 13.25
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
171 TestMultiControlPlane/serial/RestartSecondaryNode 8.31
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 97.76
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.47
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
176 TestMultiControlPlane/serial/StopCluster 47.64
177 TestMultiControlPlane/serial/RestartCluster 55.91
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.68
179 TestMultiControlPlane/serial/AddSecondaryNode 84.4
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
185 TestJSONOutput/start/Command 40.38
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.12
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.22
210 TestKicCustomNetwork/create_custom_network 28.48
211 TestKicCustomNetwork/use_default_bridge_network 24.96
212 TestKicExistingNetwork 25.34
213 TestKicCustomSubnet 24.47
214 TestKicStaticIP 25.46
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 50.76
219 TestMountStart/serial/StartWithMountFirst 4.78
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 7.7
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.64
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.17
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 96.12
231 TestMultiNode/serial/DeployApp2Nodes 2.84
232 TestMultiNode/serial/PingHostFrom2Pods 0.69
233 TestMultiNode/serial/AddNode 26.3
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.64
236 TestMultiNode/serial/CopyFile 9.59
237 TestMultiNode/serial/StopNode 2.23
238 TestMultiNode/serial/StartAfterStop 7.01
239 TestMultiNode/serial/RestartKeepsNodes 57.93
240 TestMultiNode/serial/DeleteNode 4.94
241 TestMultiNode/serial/StopMultiNode 17.59
242 TestMultiNode/serial/RestartMultiNode 41.81
243 TestMultiNode/serial/ValidateNameConflict 23.41
248 TestPreload 84.72
250 TestScheduledStopUnix 99.9
253 TestInsufficientStorage 9.39
254 TestRunningBinaryUpgrade 43.48
256 TestKubernetesUpgrade 309.74
257 TestMissingContainerUpgrade 109.79
259 TestPause/serial/Start 47.94
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
262 TestNoKubernetes/serial/StartWithK8s 29.66
263 TestNoKubernetes/serial/StartWithStopK8s 18.66
264 TestPause/serial/SecondStartNoReconfiguration 8.19
265 TestNoKubernetes/serial/Start 7.52
266 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
267 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
269 TestNoKubernetes/serial/ProfileList 1.87
270 TestNoKubernetes/serial/Stop 1.26
271 TestNoKubernetes/serial/StartNoArgs 6.91
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
280 TestNetworkPlugins/group/false 5.76
284 TestStoppedBinaryUpgrade/Setup 0.57
285 TestStoppedBinaryUpgrade/Upgrade 99.93
286 TestStoppedBinaryUpgrade/MinikubeLogs 0.95
294 TestNetworkPlugins/group/auto/Start 41.36
295 TestNetworkPlugins/group/kindnet/Start 40.58
296 TestNetworkPlugins/group/auto/KubeletFlags 0.3
297 TestNetworkPlugins/group/auto/NetCatPod 9.19
298 TestNetworkPlugins/group/auto/DNS 0.15
299 TestNetworkPlugins/group/auto/Localhost 0.1
300 TestNetworkPlugins/group/auto/HairPin 0.13
301 TestNetworkPlugins/group/calico/Start 54.5
302 TestNetworkPlugins/group/custom-flannel/Start 51.41
303 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
304 TestNetworkPlugins/group/kindnet/KubeletFlags 0.36
305 TestNetworkPlugins/group/kindnet/NetCatPod 9.23
306 TestNetworkPlugins/group/kindnet/DNS 0.11
307 TestNetworkPlugins/group/kindnet/Localhost 0.09
308 TestNetworkPlugins/group/kindnet/HairPin 0.1
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/enable-default-cni/Start 63.31
311 TestNetworkPlugins/group/calico/KubeletFlags 0.31
312 TestNetworkPlugins/group/calico/NetCatPod 11.17
313 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.31
314 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.19
315 TestNetworkPlugins/group/calico/DNS 0.16
316 TestNetworkPlugins/group/calico/Localhost 0.12
317 TestNetworkPlugins/group/calico/HairPin 0.11
318 TestNetworkPlugins/group/custom-flannel/DNS 0.12
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
321 TestNetworkPlugins/group/flannel/Start 48.11
322 TestNetworkPlugins/group/bridge/Start 36.28
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.21
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.12
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.08
328 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
329 TestNetworkPlugins/group/bridge/NetCatPod 8.19
330 TestNetworkPlugins/group/flannel/ControllerPod 6.01
331 TestNetworkPlugins/group/bridge/DNS 0.11
332 TestNetworkPlugins/group/bridge/Localhost 0.08
333 TestNetworkPlugins/group/bridge/HairPin 0.09
334 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
335 TestNetworkPlugins/group/flannel/NetCatPod 9.18
337 TestStartStop/group/old-k8s-version/serial/FirstStart 53.76
338 TestNetworkPlugins/group/flannel/DNS 0.11
339 TestNetworkPlugins/group/flannel/Localhost 0.11
340 TestNetworkPlugins/group/flannel/HairPin 0.11
342 TestStartStop/group/no-preload/serial/FirstStart 55.73
344 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 48.37
346 TestStartStop/group/newest-cni/serial/FirstStart 31.7
347 TestStartStop/group/old-k8s-version/serial/DeployApp 9.26
348 TestStartStop/group/newest-cni/serial/DeployApp 0
351 TestStartStop/group/newest-cni/serial/Stop 18.02
352 TestStartStop/group/old-k8s-version/serial/Stop 15.99
353 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 7.2
354 TestStartStop/group/no-preload/serial/DeployApp 7.24
357 TestStartStop/group/default-k8s-diff-port/serial/Stop 18.13
358 TestStartStop/group/no-preload/serial/Stop 18.25
359 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
360 TestStartStop/group/old-k8s-version/serial/SecondStart 48.65
361 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
362 TestStartStop/group/newest-cni/serial/SecondStart 10.87
363 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
364 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
365 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
367 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
368 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 49.95
369 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
370 TestStartStop/group/no-preload/serial/SecondStart 51.63
372 TestStartStop/group/embed-certs/serial/FirstStart 45.25
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
377 TestStartStop/group/embed-certs/serial/DeployApp 8.22
378 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
379 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
380 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
383 TestStartStop/group/embed-certs/serial/Stop 16.2
384 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
386 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
388 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
389 TestStartStop/group/embed-certs/serial/SecondStart 48.04
390 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
391 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
392 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
TestDownloadOnly/v1.28.0/json-events (4.94s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-539155 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-539155 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.944220398s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.94s)
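Note: the -o=json flag makes minikube emit one CloudEvents-style JSON object per line, which is what this test consumes. A hedged sketch of filtering the progress messages with jq, assuming the io.k8s.sigs.minikube.step event type used by recent minikube releases and a hypothetical profile name:

    minikube start -o=json --download-only -p json-demo \
      --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'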

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 02:24:05.931326  349078 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1124 02:24:05.931416  349078 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
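Note: this check only verifies the cached tarball exists on disk. The equivalent manual check, using the cache path printed above (with ~/.minikube standing in for the test's MINIKUBE_HOME):

    ls -lh ~/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4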

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-539155
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-539155: exit status 85 (73.30854ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-539155 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-539155 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:24:01
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:24:01.039261  349090 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:24:01.039355  349090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:24:01.039363  349090 out.go:374] Setting ErrFile to fd 2...
	I1124 02:24:01.039367  349090 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:24:01.039551  349090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	W1124 02:24:01.039658  349090 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21975-345525/.minikube/config/config.json: open /home/jenkins/minikube-integration/21975-345525/.minikube/config/config.json: no such file or directory
	I1124 02:24:01.040124  349090 out.go:368] Setting JSON to true
	I1124 02:24:01.041052  349090 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3988,"bootTime":1763947053,"procs":231,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:24:01.041110  349090 start.go:143] virtualization: kvm guest
	I1124 02:24:01.044445  349090 out.go:99] [download-only-539155] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1124 02:24:01.044577  349090 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 02:24:01.044630  349090 notify.go:221] Checking for updates...
	I1124 02:24:01.045567  349090 out.go:171] MINIKUBE_LOCATION=21975
	I1124 02:24:01.046541  349090 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:24:01.047620  349090 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 02:24:01.048522  349090 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 02:24:01.049439  349090 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 02:24:01.051170  349090 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 02:24:01.051398  349090 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:24:01.075397  349090 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:24:01.075504  349090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:24:01.130144  349090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-24 02:24:01.121053429 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:24:01.130254  349090 docker.go:319] overlay module found
	I1124 02:24:01.131670  349090 out.go:99] Using the docker driver based on user configuration
	I1124 02:24:01.131702  349090 start.go:309] selected driver: docker
	I1124 02:24:01.131711  349090 start.go:927] validating driver "docker" against <nil>
	I1124 02:24:01.131790  349090 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:24:01.184207  349090 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-24 02:24:01.175186711 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:24:01.184369  349090 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 02:24:01.184921  349090 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 02:24:01.185071  349090 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 02:24:01.186502  349090 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-539155 host does not exist
	  To start a cluster, run: "minikube start -p download-only-539155"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-539155
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (4.13s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-550393 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-550393 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.130471331s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.13s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1124 02:24:10.484471  349078 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1124 02:24:10.484506  349078 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-550393
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-550393: exit status 85 (70.427944ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-539155 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-539155 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ delete  │ -p download-only-539155                                                                                                                                                   │ download-only-539155 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │ 24 Nov 25 02:24 UTC │
	│ start   │ -o=json --download-only -p download-only-550393 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-550393 │ jenkins │ v1.37.0 │ 24 Nov 25 02:24 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:24:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:24:06.404824  349448 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:24:06.405092  349448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:24:06.405101  349448 out.go:374] Setting ErrFile to fd 2...
	I1124 02:24:06.405105  349448 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:24:06.405303  349448 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:24:06.405711  349448 out.go:368] Setting JSON to true
	I1124 02:24:06.406641  349448 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":3993,"bootTime":1763947053,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:24:06.406689  349448 start.go:143] virtualization: kvm guest
	I1124 02:24:06.408219  349448 out.go:99] [download-only-550393] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:24:06.408400  349448 notify.go:221] Checking for updates...
	I1124 02:24:06.409479  349448 out.go:171] MINIKUBE_LOCATION=21975
	I1124 02:24:06.410934  349448 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:24:06.412015  349448 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 02:24:06.413106  349448 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 02:24:06.414059  349448 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 02:24:06.415865  349448 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 02:24:06.416062  349448 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:24:06.438479  349448 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:24:06.438576  349448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:24:06.493539  349448 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-24 02:24:06.484634712 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:24:06.493645  349448 docker.go:319] overlay module found
	I1124 02:24:06.495033  349448 out.go:99] Using the docker driver based on user configuration
	I1124 02:24:06.495071  349448 start.go:309] selected driver: docker
	I1124 02:24:06.495076  349448 start.go:927] validating driver "docker" against <nil>
	I1124 02:24:06.495152  349448 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:24:06.548435  349448 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-24 02:24:06.539238006 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:24:06.548588  349448 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 02:24:06.549108  349448 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 02:24:06.549257  349448 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 02:24:06.550747  349448 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-550393 host does not exist
	  To start a cluster, run: "minikube start -p download-only-550393"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-550393
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.38s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-371720 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-371720" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-371720
--- PASS: TestDownloadOnlyKic (0.38s)

TestBinaryMirror (0.8s)

=== RUN   TestBinaryMirror
I1124 02:24:11.570463  349078 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-907926 --alsologtostderr --binary-mirror http://127.0.0.1:38615 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-907926" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-907926
--- PASS: TestBinaryMirror (0.80s)
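Note: the test points --binary-mirror at a throwaway local HTTP server so kubectl/kubeadm/kubelet are fetched from it instead of dl.k8s.io. A rough sketch of the same idea outside the harness; the mirror layout mirroring dl.k8s.io's /release/<version>/bin/<os>/<arch>/ paths is an assumption based on the URL logged above, not something this report confirms:

    mkdir -p mirror/release/v1.34.1/bin/linux/amd64    # hypothetical mirror root
    # populate mirror/release/... with the binaries, then serve it
    ( cd mirror && python3 -m http.server 38615 ) &
    minikube start --download-only -p mirror-demo \
      --binary-mirror http://127.0.0.1:38615 --driver=docker --container-runtime=crio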

TestOffline (49.45s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-493654 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-493654 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (46.879212326s)
helpers_test.go:175: Cleaning up "offline-crio-493654" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-493654
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-493654: (2.568574752s)
--- PASS: TestOffline (49.45s)
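Note: the offline test relies on caches being populated before networking is taken away by the harness. A loose at-home approximation, under the assumption that a prior --download-only run leaves everything a later start needs in ~/.minikube/cache:

    # prime all caches while online
    minikube start --download-only -p offline-demo --driver=docker --container-runtime=crio
    # later, e.g. with the uplink disabled, the start should come from cache
    minikube start -p offline-demo --memory=3072 --wait=true --driver=docker --container-runtime=crio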

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-831846
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-831846: exit status 85 (72.08365ms)

-- stdout --
	* Profile "addons-831846" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-831846"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-831846
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-831846: exit status 85 (72.748303ms)

-- stdout --
	* Profile "addons-831846" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-831846"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (124.21s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-831846 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-831846 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m4.208213598s)
--- PASS: TestAddons/Setup (124.21s)
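Note: the setup enables everything in one shot via repeated --addons flags; the same addons can also be toggled on the running profile afterwards, e.g.:

    minikube addons list -p addons-831846
    minikube addons enable metrics-server -p addons-831846
    minikube addons disable inspektor-gadget -p addons-831846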

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-831846 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-831846 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (8.39s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-831846 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-831846 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1f1ea0f0-3e69-4c29-a085-19c46e304737] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1f1ea0f0-3e69-4c29-a085-19c46e304737] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003210251s
addons_test.go:694: (dbg) Run:  kubectl --context addons-831846 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-831846 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-831846 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.39s)
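Note: the assertions above boil down to execs into the busybox pod; by hand they look like this (commands taken from the test, with a combined printenv and a file check added for convenience — the mount path is whatever gcp-auth injected, read from the env var rather than assumed):

    kubectl --context addons-831846 exec busybox -- /bin/sh -c 'printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT'
    # the credentials path should point at the fake key file mounted by gcp-auth
    kubectl --context addons-831846 exec busybox -- /bin/sh -c 'ls -l "$GOOGLE_APPLICATION_CREDENTIALS"'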

TestAddons/StoppedEnableDisable (16.62s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-831846
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-831846: (16.335291125s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-831846
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-831846
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-831846
--- PASS: TestAddons/StoppedEnableDisable (16.62s)

TestCertOptions (33.93s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-575542 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-575542 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (29.494072797s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-575542 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-575542 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-575542 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-575542" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-575542
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-575542: (3.776915115s)
--- PASS: TestCertOptions (33.93s)
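Note: the openssl call above dumps the whole apiserver certificate; to see just the IPs and names injected by --apiserver-ips/--apiserver-names, pipe it through grep:

    minikube -p cert-options-575542 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 'Subject Alternative Name'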

TestCertExpiration (210.73s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-062725 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-062725 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (22.692520565s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-062725 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-062725 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.49281717s)
helpers_test.go:175: Cleaning up "cert-expiration-062725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-062725
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-062725: (2.538415678s)
--- PASS: TestCertExpiration (210.73s)
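Note: to watch the expiry window move between the two starts (--cert-expiration=3m, then 8760h), the certificate's notAfter date can be read the same way the options test reads the SANs:

    minikube -p cert-expiration-062725 ssh \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"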

TestForceSystemdFlag (25.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-597158 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-597158 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.196713192s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-597158 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-597158" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-597158
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-597158: (2.87686994s)
--- PASS: TestForceSystemdFlag (25.47s)
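
The file read above is where minikube drops its CRI-O override, and with --force-systemd the expected value is a systemd cgroup manager; a minimal check, assuming the profile is still up:

	$ out/minikube-linux-amd64 -p force-systemd-flag-597158 ssh -- "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
	# cgroup_manager = "systemd"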

                                                
                                    
TestForceSystemdEnv (29.08s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-550049 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-550049 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.342473658s)
helpers_test.go:175: Cleaning up "force-systemd-env-550049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-550049
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-550049: (2.732022503s)
--- PASS: TestForceSystemdEnv (29.08s)

                                                
                                    
TestErrorSpam/setup (24.03s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-421104 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-421104 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-421104 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-421104 --driver=docker  --container-runtime=crio: (24.029651578s)
--- PASS: TestErrorSpam/setup (24.03s)

                                                
                                    
TestErrorSpam/start (0.65s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 start --dry-run
--- PASS: TestErrorSpam/start (0.65s)

                                                
                                    
TestErrorSpam/status (0.93s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 status
--- PASS: TestErrorSpam/status (0.93s)

                                                
                                    
TestErrorSpam/pause (5.47s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 pause: exit status 80 (2.398448406s)

                                                
                                                
-- stdout --
	* Pausing node nospam-421104 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:29:56Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 pause: exit status 80 (1.536776328s)

                                                
                                                
-- stdout --
	* Pausing node nospam-421104 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:29:58Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 pause: exit status 80 (1.531205419s)

                                                
                                                
-- stdout --
	* Pausing node nospam-421104 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:29:59Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.47s)
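
All three pause attempts fail identically: `runc list` cannot open /run/runc inside the node, so minikube exits with GUEST_PAUSE before pausing anything; the subtest still passes, apparently because it asserts on repeated log output rather than on command success. A diagnostic sketch, assuming the node is still running (CRI-O may drive its runtime with a different state root, in which case /run/runc legitimately does not exist):

	$ out/minikube-linux-amd64 -p nospam-421104 ssh -- "sudo runc list -f json"
	$ out/minikube-linux-amd64 -p nospam-421104 ssh -- "sudo crictl ps"
	# crictl queries CRI-O directly, so it lists containers even when the runc
	# state directory is absent.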

                                                
                                    
TestErrorSpam/unpause (5.42s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 unpause: exit status 80 (1.342731002s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-421104 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:30:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 unpause: exit status 80 (2.232338042s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-421104 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:30:03Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 unpause: exit status 80 (1.841959989s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-421104 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T02:30:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.42s)

                                                
                                    
TestErrorSpam/stop (12.57s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 stop: (12.372035196s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-421104 --log_dir /tmp/nospam-421104 stop
--- PASS: TestErrorSpam/stop (12.57s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21975-345525/.minikube/files/etc/test/nested/copy/349078/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (35.61s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-333040 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-333040 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (35.613201151s)
--- PASS: TestFunctional/serial/StartWithProxy (35.61s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.08s)

=== RUN   TestFunctional/serial/SoftStart
I1124 02:30:57.186483  349078 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-333040 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-333040 --alsologtostderr -v=8: (6.075767386s)
functional_test.go:678: soft start took 6.076763106s for "functional-333040" cluster.
I1124 02:31:03.263054  349078 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.08s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-333040 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.57s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-333040 /tmp/TestFunctionalserialCacheCmdcacheadd_local1660290533/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 cache add minikube-local-cache-test:functional-333040
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 cache delete minikube-local-cache-test:functional-333040
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-333040
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.16s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.430419ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
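
Restated as a standalone sequence (commands taken verbatim from this run): remove a cached image from the node, confirm it is gone, then let `cache reload` push it back from the host-side cache:

	$ out/minikube-linux-amd64 -p functional-333040 ssh sudo crictl rmi registry.k8s.io/pause:latest
	$ out/minikube-linux-amd64 -p functional-333040 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
	$ out/minikube-linux-amd64 -p functional-333040 cache reload
	$ out/minikube-linux-amd64 -p functional-333040 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again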

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 kubectl -- --context functional-333040 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-333040 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (41.88s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-333040 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1124 02:31:17.181070  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:17.187457  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:17.198781  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:17.220109  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:17.261415  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:17.342809  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:17.504272  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:17.825932  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:18.467960  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:19.749563  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:22.312409  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:27.434404  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:31:37.676016  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-333040 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.882068912s)
functional_test.go:776: restart took 41.882188212s for "functional-333040" cluster.
I1124 02:31:51.231849  349078 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (41.88s)
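
One way to confirm that the --extra-config value actually landed is to read the generated apiserver manifest inside the node; a sketch, assuming the standard kubeadm static-pod path:

	$ out/minikube-linux-amd64 -p functional-333040 ssh -- "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"
	# expect NamespaceAutoProvision in the flag value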

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-333040 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.13s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-333040 logs: (1.130039393s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.15s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 logs --file /tmp/TestFunctionalserialLogsFileCmd740265040/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-333040 logs --file /tmp/TestFunctionalserialLogsFileCmd740265040/001/logs.txt: (1.145312619s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.15s)

                                                
                                    
TestFunctional/serial/InvalidService (4.25s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-333040 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-333040
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-333040: exit status 115 (336.333859ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30414 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-333040 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.25s)
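
The SVC_UNREACHABLE exit above is the fixture working as intended: the Service selects a pod that never reaches Running, so no endpoint backs it. The quickest manual check is the endpoints object; a sketch:

	$ kubectl --context functional-333040 get endpoints invalid-svc
	# should show no ready endpoints (ENDPOINTS column <none>)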

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 config get cpus: exit status 14 (81.945849ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 config get cpus: exit status 14 (85.726611ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
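
The two exit-status-14 results above are the expected path, not failures: `config get` on an unset key exits non-zero with the error shown. The round trip, restated:

	$ out/minikube-linux-amd64 -p functional-333040 config set cpus 2
	$ out/minikube-linux-amd64 -p functional-333040 config get cpus     # prints 2, exit 0
	$ out/minikube-linux-amd64 -p functional-333040 config unset cpus
	$ out/minikube-linux-amd64 -p functional-333040 config get cpus     # exit 14: key not found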

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.21s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-333040 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-333040 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 388983: os: process already finished
E1124 02:32:39.119766  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:34:01.041633  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:36:17.180217  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:36:44.883185  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:41:17.180318  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DashboardCmd (7.21s)

                                                
                                    
TestFunctional/parallel/DryRun (0.4s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-333040 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-333040 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (168.719127ms)

                                                
                                                
-- stdout --
	* [functional-333040] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 02:32:25.830054  388137 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:32:25.830339  388137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:25.830350  388137 out.go:374] Setting ErrFile to fd 2...
	I1124 02:32:25.830354  388137 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:25.830604  388137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:32:25.831092  388137 out.go:368] Setting JSON to false
	I1124 02:32:25.832089  388137 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4493,"bootTime":1763947053,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:32:25.832143  388137 start.go:143] virtualization: kvm guest
	I1124 02:32:25.834422  388137 out.go:179] * [functional-333040] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 02:32:25.835600  388137 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:32:25.835611  388137 notify.go:221] Checking for updates...
	I1124 02:32:25.837539  388137 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:32:25.838554  388137 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 02:32:25.839573  388137 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 02:32:25.840528  388137 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:32:25.841504  388137 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:32:25.842847  388137 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:32:25.843568  388137 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:32:25.870468  388137 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:32:25.870563  388137 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:25.927516  388137 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:25.916758834 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:25.927622  388137 docker.go:319] overlay module found
	I1124 02:32:25.928970  388137 out.go:179] * Using the docker driver based on existing profile
	I1124 02:32:25.929940  388137 start.go:309] selected driver: docker
	I1124 02:32:25.929962  388137 start.go:927] validating driver "docker" against &{Name:functional-333040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-333040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:25.930048  388137 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:32:25.932306  388137 out.go:203] 
	W1124 02:32:25.933124  388137 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 02:32:25.933989  388137 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-333040 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.40s)
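
--dry-run still runs flag validation, which is what exit status 23 captures above: the requested 250MB is rejected against the 1800MB usable minimum before any node is touched. Restated:

	$ out/minikube-linux-amd64 start -p functional-333040 --dry-run --memory 250MB; echo $?
	# X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: ...
	# 23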

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-333040 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-333040 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (161.312821ms)

                                                
                                                
-- stdout --
	* [functional-333040] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 02:32:16.537166  385414 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:32:16.537254  385414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:16.537261  385414 out.go:374] Setting ErrFile to fd 2...
	I1124 02:32:16.537265  385414 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:32:16.537544  385414 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:32:16.537923  385414 out.go:368] Setting JSON to false
	I1124 02:32:16.538858  385414 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":4483,"bootTime":1763947053,"procs":238,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 02:32:16.538921  385414 start.go:143] virtualization: kvm guest
	I1124 02:32:16.543506  385414 out.go:179] * [functional-333040] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1124 02:32:16.544724  385414 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 02:32:16.544703  385414 notify.go:221] Checking for updates...
	I1124 02:32:16.545676  385414 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:32:16.546630  385414 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 02:32:16.547607  385414 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 02:32:16.548471  385414 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 02:32:16.549434  385414 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 02:32:16.550659  385414 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:32:16.551185  385414 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:32:16.574777  385414 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 02:32:16.574954  385414 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:32:16.629923  385414 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 02:32:16.620985795 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:32:16.630023  385414 docker.go:319] overlay module found
	I1124 02:32:16.631288  385414 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 02:32:16.632305  385414 start.go:309] selected driver: docker
	I1124 02:32:16.632319  385414 start.go:927] validating driver "docker" against &{Name:functional-333040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-333040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:32:16.632404  385414 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 02:32:16.633793  385414 out.go:203] 
	W1124 02:32:16.634738  385414 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 02:32:16.635592  385414 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
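Note: the French output above is expected; this test runs minikube under a French locale and asserts that the failure message is localized. The error translates to: "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250MiB is below the usable minimum of 1800MB". A rough manual reproduction, assuming a French locale is installed on the host (the exact flags the harness passes are not shown in this log and are an assumption here):

LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-333040 --memory=250mb
# should exit non-zero with the localized RSRC_INSUFFICIENT_REQ_MEMORY message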

                                                
                                    
TestFunctional/parallel/StatusCmd (1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)
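Note: the -f argument to status is a Go template rendered against minikube's status struct; the labels before each colon are free text (so the "kublet" spelling in the harness command is cosmetic), and only the {{.Field}} references matter. A sketch with the label fixed; the printed values are typical for a healthy profile, not guaranteed:

out/minikube-linux-amd64 -p functional-333040 status \
  -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
# host:Running,kubelet:Running,apiserver:Running,kubeconfig:Configured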

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (22.49s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [76e52d7f-173a-48f6-921d-983209c84bee] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003489222s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-333040 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-333040 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-333040 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-333040 apply -f testdata/storage-provisioner/pod.yaml
I1124 02:32:07.970843  349078 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [148ecc51-9112-4104-8a2c-ad44fb3daf4d] Pending
helpers_test.go:352: "sp-pod" [148ecc51-9112-4104-8a2c-ad44fb3daf4d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [148ecc51-9112-4104-8a2c-ad44fb3daf4d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003319161s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-333040 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-333040 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-333040 apply -f testdata/storage-provisioner/pod.yaml
I1124 02:32:18.015318  349078 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [c4e8929b-73ef-493f-959e-4f1405f28db5] Pending
helpers_test.go:352: "sp-pod" [c4e8929b-73ef-493f-959e-4f1405f28db5] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [c4e8929b-73ef-493f-959e-4f1405f28db5] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004396569s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-333040 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.49s)
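Note: the claim and pod come from testdata/storage-provisioner/, whose contents this log does not include; the flow is standard dynamic provisioning: create a PVC against the default StorageClass, mount it in a pod, write a file, delete and recreate the pod, and check the file survived. A minimal hand-rolled sketch of the same shape (names and sizes are illustrative, not the repo's testdata):

kubectl --context functional-333040 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-claim        # illustrative; the test's claim is "myclaim"
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - mountPath: /tmp/mount
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: demo-claim
EOF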

                                                
                                    
TestFunctional/parallel/SSHCmd (0.6s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.60s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.04s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh -n functional-333040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 cp functional-333040:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1340657662/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh -n functional-333040 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh -n functional-333040 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.04s)
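Note: minikube cp copies in both directions and creates missing directories on the target, which is what the third command pair checks (/tmp/does/not/exist). Standalone usage, with illustrative file names:

out/minikube-linux-amd64 -p functional-333040 cp ./local.txt /home/docker/remote.txt
out/minikube-linux-amd64 -p functional-333040 cp functional-333040:/home/docker/remote.txt ./copied-back.txt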

                                                
                                    
TestFunctional/parallel/MySQL (18.05s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-333040 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-xgm98" [cb090745-021d-42fb-b80a-5b42d4273ce1] Pending
helpers_test.go:352: "mysql-5bb876957f-xgm98" [cb090745-021d-42fb-b80a-5b42d4273ce1] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-xgm98" [cb090745-021d-42fb-b80a-5b42d4273ce1] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.003620126s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-333040 exec mysql-5bb876957f-xgm98 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-333040 exec mysql-5bb876957f-xgm98 -- mysql -ppassword -e "show databases;": exit status 1 (92.573309ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 02:32:11.712285  349078 retry.go:31] will retry after 1.326701776s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-333040 exec mysql-5bb876957f-xgm98 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-333040 exec mysql-5bb876957f-xgm98 -- mysql -ppassword -e "show databases;": exit status 1 (86.399923ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 02:32:13.126384  349078 retry.go:31] will retry after 994.169967ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-333040 exec mysql-5bb876957f-xgm98 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-333040 exec mysql-5bb876957f-xgm98 -- mysql -ppassword -e "show databases;": exit status 1 (92.886861ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 02:32:14.214933  349078 retry.go:31] will retry after 2.180891654s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-333040 exec mysql-5bb876957f-xgm98 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (18.05s)
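Note: the three failed attempts before the final success are expected; the pod is Running as soon as the container starts, but mysqld needs several more seconds to initialize the root account and open its socket, hence the harness's retry.go backoff. A plain-shell equivalent of that loop, assuming the Deployment behind pod mysql-5bb876957f-xgm98 is named mysql:

until kubectl --context functional-333040 exec deploy/mysql -- \
    mysql -ppassword -e 'show databases;' >/dev/null 2>&1; do
  sleep 2   # keep polling until mysqld accepts connections
done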

                                                
                                    
TestFunctional/parallel/FileSync (0.31s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/349078/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "sudo cat /etc/test/nested/copy/349078/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.31s)

                                                
                                    
TestFunctional/parallel/CertSync (1.92s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/349078.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "sudo cat /etc/ssl/certs/349078.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/349078.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "sudo cat /usr/share/ca-certificates/349078.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3490782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "sudo cat /etc/ssl/certs/3490782.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3490782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "sudo cat /usr/share/ca-certificates/3490782.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.92s)
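Note: the .0 file names are OpenSSL subject-hash links; 51391683.0 and 3ec20f2e.0 are derived from the two synced certificates, which is how the test predicts where the CA store exposes them. The hash for any PEM can be computed on the host:

openssl x509 -noout -hash -in 349078.pem
# prints 51391683, so the node exposes it as /etc/ssl/certs/51391683.0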

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-333040 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 ssh "sudo systemctl is-active docker": exit status 1 (305.573902ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "sudo systemctl is-active containerd"
E1124 02:31:58.157992  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 ssh "sudo systemctl is-active containerd": exit status 1 (301.798909ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.61s)
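Note: the non-zero exits are the expected outcome here; systemctl is-active prints "inactive" and exits with status 3 when a unit is not running, and the test asserts exactly that for the two runtimes this crio profile should not be using. The active runtime answers the same probe positively (assuming crio is the unit name on this image):

out/minikube-linux-amd64 -p functional-333040 ssh "sudo systemctl is-active crio"
# expected: "active", exit status 0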

                                                
                                    
TestFunctional/parallel/License (0.46s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.46s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (0.68s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-333040 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-333040 image ls --format short --alsologtostderr:
I1124 02:32:26.673909  388844 out.go:360] Setting OutFile to fd 1 ...
I1124 02:32:26.674190  388844 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:32:26.674201  388844 out.go:374] Setting ErrFile to fd 2...
I1124 02:32:26.674206  388844 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:32:26.674408  388844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
I1124 02:32:26.674919  388844 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:32:26.675022  388844 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:32:26.675465  388844 cli_runner.go:164] Run: docker container inspect functional-333040 --format={{.State.Status}}
I1124 02:32:26.693194  388844 ssh_runner.go:195] Run: systemctl --version
I1124 02:32:26.693237  388844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333040
I1124 02:32:26.713016  388844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/functional-333040/id_rsa Username:docker}
I1124 02:32:26.815950  388844 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
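Note: on a crio profile, image ls is a thin wrapper; as the last stderr line shows, minikube SSHes into the node, runs crictl, and formats the resulting JSON into the requested style (short here, table/json/yaml in the following tests). The raw data is available directly:

out/minikube-linux-amd64 -p functional-333040 ssh "sudo crictl images --output json"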

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-333040 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-333040 image ls --format table --alsologtostderr:
I1124 02:32:27.386606  389046 out.go:360] Setting OutFile to fd 1 ...
I1124 02:32:27.386720  389046 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:32:27.386730  389046 out.go:374] Setting ErrFile to fd 2...
I1124 02:32:27.386737  389046 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:32:27.386996  389046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
I1124 02:32:27.387589  389046 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:32:27.387716  389046 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:32:27.389414  389046 cli_runner.go:164] Run: docker container inspect functional-333040 --format={{.State.Status}}
I1124 02:32:27.407861  389046 ssh_runner.go:195] Run: systemctl --version
I1124 02:32:27.407924  389046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333040
I1124 02:32:27.424431  389046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/functional-333040/id_rsa Username:docker}
I1124 02:32:27.521073  389046 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-333040 image ls --format json --alsologtostderr:
[{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad04
5384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686
139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe
9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDige
sts":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":
"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-333040 image ls --format json --alsologtostderr:
I1124 02:32:27.151114  388977 out.go:360] Setting OutFile to fd 1 ...
I1124 02:32:27.151219  388977 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:32:27.151230  388977 out.go:374] Setting ErrFile to fd 2...
I1124 02:32:27.151234  388977 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:32:27.151467  388977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
I1124 02:32:27.152015  388977 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:32:27.152165  388977 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:32:27.152765  388977 cli_runner.go:164] Run: docker container inspect functional-333040 --format={{.State.Status}}
I1124 02:32:27.172040  388977 ssh_runner.go:195] Run: systemctl --version
I1124 02:32:27.172091  388977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333040
I1124 02:32:27.189654  388977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/functional-333040/id_rsa Username:docker}
I1124 02:32:27.292859  388977 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-333040 image ls --format yaml --alsologtostderr:
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-333040 image ls --format yaml --alsologtostderr:
I1124 02:32:26.909786  388922 out.go:360] Setting OutFile to fd 1 ...
I1124 02:32:26.910099  388922 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:32:26.910113  388922 out.go:374] Setting ErrFile to fd 2...
I1124 02:32:26.910120  388922 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:32:26.910404  388922 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
I1124 02:32:26.911247  388922 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:32:26.911404  388922 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:32:26.912046  388922 cli_runner.go:164] Run: docker container inspect functional-333040 --format={{.State.Status}}
I1124 02:32:26.930350  388922 ssh_runner.go:195] Run: systemctl --version
I1124 02:32:26.930409  388922 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333040
I1124 02:32:26.949113  388922 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/functional-333040/id_rsa Username:docker}
I1124 02:32:27.051389  388922 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 ssh pgrep buildkitd: exit status 1 (265.573543ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image build -t localhost/my-image:functional-333040 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-333040 image build -t localhost/my-image:functional-333040 testdata/build --alsologtostderr: (1.708808657s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-333040 image build -t localhost/my-image:functional-333040 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 7a7f717b3f2
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-333040
--> b9bb16e2b6a
Successfully tagged localhost/my-image:functional-333040
b9bb16e2b6aab055b31d81b97ea7f4a8d7e0bba8ba830614fd02cf17548b9b1b
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-333040 image build -t localhost/my-image:functional-333040 testdata/build --alsologtostderr:
I1124 02:32:27.872962  389258 out.go:360] Setting OutFile to fd 1 ...
I1124 02:32:27.873221  389258 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:32:27.873230  389258 out.go:374] Setting ErrFile to fd 2...
I1124 02:32:27.873235  389258 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 02:32:27.873425  389258 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
I1124 02:32:27.873985  389258 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:32:27.874633  389258 config.go:182] Loaded profile config "functional-333040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 02:32:27.875057  389258 cli_runner.go:164] Run: docker container inspect functional-333040 --format={{.State.Status}}
I1124 02:32:27.893352  389258 ssh_runner.go:195] Run: systemctl --version
I1124 02:32:27.893401  389258 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-333040
I1124 02:32:27.912352  389258 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/functional-333040/id_rsa Username:docker}
I1124 02:32:28.008790  389258 build_images.go:162] Building image from path: /tmp/build.3091982068.tar
I1124 02:32:28.008870  389258 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 02:32:28.016659  389258 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3091982068.tar
I1124 02:32:28.020187  389258 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3091982068.tar: stat -c "%s %y" /var/lib/minikube/build/build.3091982068.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3091982068.tar': No such file or directory
I1124 02:32:28.020220  389258 ssh_runner.go:362] scp /tmp/build.3091982068.tar --> /var/lib/minikube/build/build.3091982068.tar (3072 bytes)
I1124 02:32:28.037642  389258 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3091982068
I1124 02:32:28.046131  389258 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3091982068 -xf /var/lib/minikube/build/build.3091982068.tar
I1124 02:32:28.054711  389258 crio.go:315] Building image: /var/lib/minikube/build/build.3091982068
I1124 02:32:28.054766  389258 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-333040 /var/lib/minikube/build/build.3091982068 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1124 02:32:29.500031  389258 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-333040 /var/lib/minikube/build/build.3091982068 --cgroup-manager=cgroupfs: (1.445230521s)
I1124 02:32:29.500111  389258 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3091982068
I1124 02:32:29.508353  389258 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3091982068.tar
I1124 02:32:29.517918  389258 build_images.go:218] Built localhost/my-image:functional-333040 from /tmp/build.3091982068.tar
I1124 02:32:29.517950  389258 build_images.go:134] succeeded building to: functional-333040
I1124 02:32:29.517957  389258 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image ls
2025/11/24 02:32:33 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.24s)
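Note: since buildkitd is not running in the node (the pgrep probe exits 1), minikube falls back to the path visible in the stderr trace: tar the build context, copy it to /var/lib/minikube/build/, and drive sudo podman build there with --cgroup-manager=cgroupfs. The same build can be reproduced standalone:

out/minikube-linux-amd64 -p functional-333040 image build \
  -t localhost/my-image:functional-333040 testdata/build
out/minikube-linux-amd64 -p functional-333040 image ls   # should list localhost/my-image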

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.23s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.199160517s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-333040
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.23s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
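Note: all three variants run the same command; update-context rewrites the profile's kubeconfig entry so the server address matches the cluster's current IP and forwarded port, which matters after a restart reassigns the port. A typical read-back after running it (the jsonpath filter is illustrative):

out/minikube-linux-amd64 -p functional-333040 update-context
kubectl config view -o jsonpath='{.clusters[?(@.name=="functional-333040")].cluster.server}'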

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-333040 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-333040 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-333040 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 383048: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-333040 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.51s)
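Note: this test only verifies that a second concurrent tunnel can start and that both shut down cleanly; the "unable to kill" / "unable to find parent" lines are the harness tolerating processes that have already exited. A manual equivalent:

out/minikube-linux-amd64 -p functional-333040 tunnel --alsologtostderr &
TUNNEL_PID=$!
# ... LoadBalancer services now receive an ingress IP (see the WaitService tests) ...
kill "$TUNNEL_PID"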

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "461.548356ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "86.529919ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-333040 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-333040 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [21fedef4-d92d-4cd8-8487-afbb146dbabc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [21fedef4-d92d-4cd8-8487-afbb146dbabc] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003169622s
I1124 02:32:13.012391  349078 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "401.541872ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.514835ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)
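For reference, the JSON emitted by `profile list -o json` above can be consumed programmatically. A minimal Go sketch, assuming `minikube` is on PATH and guessing a schema of top-level `valid`/`invalid` arrays with a `Name` field per profile; the struct models only what this example needs and is an assumption, not a definitive description of the output:

// profiles.go - a sketch of parsing `minikube profile list -o json`.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// profileList models only the fields this example needs (an assumption).
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
	Invalid []struct {
		Name string `json:"Name"`
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		log.Fatal(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
	fmt.Println("invalid profiles:", len(pl.Invalid))
}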

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image rm kicbase/echo-server:functional-333040 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.26s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-333040 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
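The IngressIP check above amounts to polling the Service until `minikube tunnel` populates `.status.loadBalancer.ingress[0].ip`. A standalone sketch of that poll, assuming kubectl on PATH plus the `functional-333040` context and `nginx-svc` Service named in the log:

// ingressip.go - polls for the load-balancer IP the way the jsonpath query above does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func ingressIP(ctx, svc string) (string, error) {
	out, err := exec.Command("kubectl", "--context", ctx, "get", "svc", svc,
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	for i := 0; i < 30; i++ { // up to ~60s; the tunnel assigns the IP asynchronously
		if ip, err := ingressIP("functional-333040", "nginx-svc"); err == nil && ip != "" {
			fmt.Println("tunnel ingress IP:", ip)
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("no ingress IP assigned; is `minikube tunnel` running?")
}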

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.97.2.102 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-333040 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
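The `failed to stop process: signal: terminated` line is expected here: when the test SIGTERMs the tunnel daemon, (*exec.Cmd).Wait reports the delivering signal as a non-nil error even though the process stopped cleanly, so the test still passes. A self-contained sketch of that behavior on Linux, using `sleep` as a stand-in for the tunnel process:

// sigterm.go - Wait reports "signal: terminated" for a SIGTERM'd child.
package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

func main() {
	cmd := exec.Command("sleep", "60") // stand-in for the tunnel daemon
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	time.Sleep(100 * time.Millisecond)
	_ = cmd.Process.Signal(syscall.SIGTERM)
	err := cmd.Wait()
	fmt.Println("Wait returned:", err) // prints: Wait returned: signal: terminated
}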

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (5.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-333040 /tmp/TestFunctionalparallelMountCmdany-port4047126558/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763951536642946971" to /tmp/TestFunctionalparallelMountCmdany-port4047126558/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763951536642946971" to /tmp/TestFunctionalparallelMountCmdany-port4047126558/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763951536642946971" to /tmp/TestFunctionalparallelMountCmdany-port4047126558/001/test-1763951536642946971
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (273.451516ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 02:32:16.916715  349078 retry.go:31] will retry after 635.668382ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 02:32 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 02:32 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 02:32 test-1763951536642946971
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh cat /mount-9p/test-1763951536642946971
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-333040 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [23bf475f-ba09-483c-84f1-045433196bc7] Pending
helpers_test.go:352: "busybox-mount" [23bf475f-ba09-483c-84f1-045433196bc7] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [23bf475f-ba09-483c-84f1-045433196bc7] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [23bf475f-ba09-483c-84f1-045433196bc7] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.002705793s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-333040 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-333040 /tmp/TestFunctionalparallelMountCmdany-port4047126558/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.81s)
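The two `findmnt` attempts above show the report's standard retry pattern: the first probe runs before the 9p mount lands, logs `will retry after 635.668382ms`, and the second succeeds. A sketch of an equivalent randomized-backoff helper; the jitter bounds here are illustrative assumptions, not minikube's actual retry.go constants:

// retry_sketch.go - randomized, roughly doubling retry delays.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with jittered, growing delays.
func retryExpo(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base + time.Duration(rand.Int63n(int64(base))) // e.g. "will retry after 635.668382ms"
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		base *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryExpo(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("exit status 1") // findmnt before the 9p mount appears
		}
		return nil
	})
	fmt.Println("succeeded after", calls, "calls, err =", err)
}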

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-333040 /tmp/TestFunctionalparallelMountCmdspecific-port1138332972/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (276.136952ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 02:32:22.730638  349078 retry.go:31] will retry after 320.906557ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-333040 /tmp/TestFunctionalparallelMountCmdspecific-port1138332972/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 ssh "sudo umount -f /mount-9p": exit status 1 (267.266922ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-333040 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-333040 /tmp/TestFunctionalparallelMountCmdspecific-port1138332972/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.60s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-333040 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516723937/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-333040 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516723937/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-333040 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516723937/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-333040 ssh "findmnt -T" /mount1: exit status 1 (348.886879ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 02:32:24.404681  349078 retry.go:31] will retry after 471.743853ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-333040 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-333040 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516723937/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-333040 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516723937/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-333040 /tmp/TestFunctionalparallelMountCmdVerifyCleanup516723937/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.72s)
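The repeated `unable to find parent, assuming dead: process does not exist` lines are the cleanup helper probing PIDs that `mount --kill=true` already reaped. A sketch of the usual Unix liveness probe (signal 0) that such a helper might use; this is an illustration, not the helpers_test.go implementation:

// alive.go - the classic signal-0 liveness probe on Unix.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func alive(pid int) bool {
	p, err := os.FindProcess(pid) // on Unix this always succeeds
	if err != nil {
		return false
	}
	// Signal 0 delivers nothing but reports whether the process exists.
	return p.Signal(syscall.Signal(0)) == nil
}

func main() {
	fmt.Println("self alive:", alive(os.Getpid())) // true
	fmt.Println("pid 999999 alive:", alive(999999)) // almost certainly false
}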

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-333040 service list: (1.693690979s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-333040 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-333040 service list -o json: (1.691033783s)
functional_test.go:1504: Took "1.691158264s" to run "out/minikube-linux-amd64 -p functional-333040 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-333040
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-333040
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-333040
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (163.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-724168 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m43.186141279s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (163.89s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.18s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-724168 kubectl -- rollout status deployment/busybox: (3.326463632s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-hx87r -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-mdtfw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-wtpjk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-hx87r -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-mdtfw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-wtpjk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-hx87r -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-mdtfw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-wtpjk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.18s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-hx87r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-hx87r -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-mdtfw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-mdtfw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-wtpjk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 kubectl -- exec busybox-7b57f96db7-wtpjk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.01s)
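The shell pipeline above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, takes the fifth line of nslookup output and its third space-separated field, which for busybox-style nslookup is the resolved host IP. A sketch of the same parse in Go; the sample output is illustrative, not captured from this run:

// nslookup_parse.go - mirrors `awk 'NR==5' | cut -d' ' -f3`.
package main

import (
	"fmt"
	"strings"
)

func thirdFieldOfFifthLine(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ") // NR==5; like cut, split on single spaces
	if len(fields) < 3 {
		return ""
	}
	return fields[2] // -f3
}

func main() {
	// Illustrative busybox-style nslookup output (an assumption, not this run's output).
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.49.1\n"
	fmt.Println(thirdFieldOfFifthLine(sample)) // 192.168.49.1
}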

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (26.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-724168 node add --alsologtostderr -v 5: (25.422869686s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.28s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-724168 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp testdata/cp-test.txt ha-724168:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile168111668/001/cp-test_ha-724168.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168:/home/docker/cp-test.txt ha-724168-m02:/home/docker/cp-test_ha-724168_ha-724168-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m02 "sudo cat /home/docker/cp-test_ha-724168_ha-724168-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168:/home/docker/cp-test.txt ha-724168-m03:/home/docker/cp-test_ha-724168_ha-724168-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m03 "sudo cat /home/docker/cp-test_ha-724168_ha-724168-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168:/home/docker/cp-test.txt ha-724168-m04:/home/docker/cp-test_ha-724168_ha-724168-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m04 "sudo cat /home/docker/cp-test_ha-724168_ha-724168-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp testdata/cp-test.txt ha-724168-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile168111668/001/cp-test_ha-724168-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168-m02:/home/docker/cp-test.txt ha-724168:/home/docker/cp-test_ha-724168-m02_ha-724168.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168 "sudo cat /home/docker/cp-test_ha-724168-m02_ha-724168.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168-m02:/home/docker/cp-test.txt ha-724168-m03:/home/docker/cp-test_ha-724168-m02_ha-724168-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m03 "sudo cat /home/docker/cp-test_ha-724168-m02_ha-724168-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168-m02:/home/docker/cp-test.txt ha-724168-m04:/home/docker/cp-test_ha-724168-m02_ha-724168-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m04 "sudo cat /home/docker/cp-test_ha-724168-m02_ha-724168-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp testdata/cp-test.txt ha-724168-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile168111668/001/cp-test_ha-724168-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168-m03:/home/docker/cp-test.txt ha-724168:/home/docker/cp-test_ha-724168-m03_ha-724168.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168 "sudo cat /home/docker/cp-test_ha-724168-m03_ha-724168.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168-m03:/home/docker/cp-test.txt ha-724168-m02:/home/docker/cp-test_ha-724168-m03_ha-724168-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m02 "sudo cat /home/docker/cp-test_ha-724168-m03_ha-724168-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168-m03:/home/docker/cp-test.txt ha-724168-m04:/home/docker/cp-test_ha-724168-m03_ha-724168-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m04 "sudo cat /home/docker/cp-test_ha-724168-m03_ha-724168-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp testdata/cp-test.txt ha-724168-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile168111668/001/cp-test_ha-724168-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168-m04:/home/docker/cp-test.txt ha-724168:/home/docker/cp-test_ha-724168-m04_ha-724168.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168 "sudo cat /home/docker/cp-test_ha-724168-m04_ha-724168.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168-m04:/home/docker/cp-test.txt ha-724168-m02:/home/docker/cp-test_ha-724168-m04_ha-724168-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m02 "sudo cat /home/docker/cp-test_ha-724168-m04_ha-724168-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 cp ha-724168-m04:/home/docker/cp-test.txt ha-724168-m03:/home/docker/cp-test_ha-724168-m04_ha-724168-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 ssh -n ha-724168-m03 "sudo cat /home/docker/cp-test_ha-724168-m04_ha-724168-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.94s)
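The CopyFile block above is an all-pairs matrix: the test copies testdata/cp-test.txt onto each node, then copies it node-to-node and `ssh cat`s both ends of every transfer. A sketch of generating that matrix, with node names taken from the log and the actual copy/verify commands left as comments:

// cpmatrix.go - enumerates the copy/verify pairs exercised above.
package main

import "fmt"

func main() {
	nodes := []string{"ha-724168", "ha-724168-m02", "ha-724168-m03", "ha-724168-m04"}
	for _, src := range nodes {
		// minikube -p ha-724168 cp testdata/cp-test.txt <src>:/home/docker/cp-test.txt
		fmt.Printf("seed %s\n", src)
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			// minikube -p ha-724168 cp <src>:/home/docker/cp-test.txt <dst>:... then ssh cat both ends
			fmt.Printf("  copy %s -> %s and verify\n", src, dst)
		}
	}
}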

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-724168 node stop m02 --alsologtostderr -v 5: (12.575473912s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-724168 status --alsologtostderr -v 5: exit status 7 (677.086477ms)

                                                
                                                
-- stdout --
	ha-724168
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-724168-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-724168-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-724168-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 02:46:10.817758  413807 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:46:10.817906  413807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:46:10.817917  413807 out.go:374] Setting ErrFile to fd 2...
	I1124 02:46:10.817924  413807 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:46:10.818135  413807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:46:10.818318  413807 out.go:368] Setting JSON to false
	I1124 02:46:10.818348  413807 mustload.go:66] Loading cluster: ha-724168
	I1124 02:46:10.818462  413807 notify.go:221] Checking for updates...
	I1124 02:46:10.818655  413807 config.go:182] Loaded profile config "ha-724168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:46:10.818674  413807 status.go:174] checking status of ha-724168 ...
	I1124 02:46:10.819137  413807 cli_runner.go:164] Run: docker container inspect ha-724168 --format={{.State.Status}}
	I1124 02:46:10.838572  413807 status.go:371] ha-724168 host status = "Running" (err=<nil>)
	I1124 02:46:10.838594  413807 host.go:66] Checking if "ha-724168" exists ...
	I1124 02:46:10.838797  413807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-724168
	I1124 02:46:10.857200  413807 host.go:66] Checking if "ha-724168" exists ...
	I1124 02:46:10.857450  413807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:46:10.857507  413807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-724168
	I1124 02:46:10.875065  413807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/ha-724168/id_rsa Username:docker}
	I1124 02:46:10.971091  413807 ssh_runner.go:195] Run: systemctl --version
	I1124 02:46:10.977353  413807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:46:10.988785  413807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:46:11.048105  413807 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 02:46:11.038546548 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:46:11.048634  413807 kubeconfig.go:125] found "ha-724168" server: "https://192.168.49.254:8443"
	I1124 02:46:11.048669  413807 api_server.go:166] Checking apiserver status ...
	I1124 02:46:11.048716  413807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 02:46:11.060393  413807 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup
	W1124 02:46:11.068431  413807 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 02:46:11.068472  413807 ssh_runner.go:195] Run: ls
	I1124 02:46:11.071920  413807 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 02:46:11.076159  413807 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 02:46:11.076183  413807 status.go:463] ha-724168 apiserver status = Running (err=<nil>)
	I1124 02:46:11.076194  413807 status.go:176] ha-724168 status: &{Name:ha-724168 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:46:11.076213  413807 status.go:174] checking status of ha-724168-m02 ...
	I1124 02:46:11.076569  413807 cli_runner.go:164] Run: docker container inspect ha-724168-m02 --format={{.State.Status}}
	I1124 02:46:11.093493  413807 status.go:371] ha-724168-m02 host status = "Stopped" (err=<nil>)
	I1124 02:46:11.093510  413807 status.go:384] host is not running, skipping remaining checks
	I1124 02:46:11.093516  413807 status.go:176] ha-724168-m02 status: &{Name:ha-724168-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:46:11.093541  413807 status.go:174] checking status of ha-724168-m03 ...
	I1124 02:46:11.093785  413807 cli_runner.go:164] Run: docker container inspect ha-724168-m03 --format={{.State.Status}}
	I1124 02:46:11.110320  413807 status.go:371] ha-724168-m03 host status = "Running" (err=<nil>)
	I1124 02:46:11.110341  413807 host.go:66] Checking if "ha-724168-m03" exists ...
	I1124 02:46:11.110577  413807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-724168-m03
	I1124 02:46:11.127011  413807 host.go:66] Checking if "ha-724168-m03" exists ...
	I1124 02:46:11.127278  413807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:46:11.127322  413807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-724168-m03
	I1124 02:46:11.143390  413807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/ha-724168-m03/id_rsa Username:docker}
	I1124 02:46:11.237910  413807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:46:11.250164  413807 kubeconfig.go:125] found "ha-724168" server: "https://192.168.49.254:8443"
	I1124 02:46:11.250185  413807 api_server.go:166] Checking apiserver status ...
	I1124 02:46:11.250214  413807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 02:46:11.260466  413807 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1157/cgroup
	W1124 02:46:11.268509  413807 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1157/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 02:46:11.268557  413807 ssh_runner.go:195] Run: ls
	I1124 02:46:11.271909  413807 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 02:46:11.275701  413807 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 02:46:11.275721  413807 status.go:463] ha-724168-m03 apiserver status = Running (err=<nil>)
	I1124 02:46:11.275728  413807 status.go:176] ha-724168-m03 status: &{Name:ha-724168-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:46:11.275745  413807 status.go:174] checking status of ha-724168-m04 ...
	I1124 02:46:11.276056  413807 cli_runner.go:164] Run: docker container inspect ha-724168-m04 --format={{.State.Status}}
	I1124 02:46:11.293301  413807 status.go:371] ha-724168-m04 host status = "Running" (err=<nil>)
	I1124 02:46:11.293319  413807 host.go:66] Checking if "ha-724168-m04" exists ...
	I1124 02:46:11.293577  413807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-724168-m04
	I1124 02:46:11.310644  413807 host.go:66] Checking if "ha-724168-m04" exists ...
	I1124 02:46:11.310875  413807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:46:11.310930  413807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-724168-m04
	I1124 02:46:11.327232  413807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/ha-724168-m04/id_rsa Username:docker}
	I1124 02:46:11.422021  413807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:46:11.434346  413807 status.go:176] ha-724168-m04 status: &{Name:ha-724168-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.25s)
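Note that this step passes despite the non-zero exit: `minikube status` returns a non-zero code when any node is not fully running, which is exactly what the test expects after stopping m02. A sketch of reading that exit code; the precise meaning of status code 7 is inferred from this log, not from minikube documentation:

// status_exit.go - reads the exit code of `minikube status`.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "ha-724168", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// Non-zero (7 in the run above) means at least one node is not running.
		fmt.Println("status exited with code", ee.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err)
	}
}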

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (8.31s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 node start m02 --alsologtostderr -v 5
E1124 02:46:17.179846  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-724168 node start m02 --alsologtostderr -v 5: (7.39637052s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (8.31s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 stop --alsologtostderr -v 5
E1124 02:46:58.617125  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:46:58.623542  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:46:58.634856  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:46:58.656169  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:46:58.697458  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:46:58.778828  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:46:58.940279  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:46:59.261962  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:46:59.903974  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:47:01.185583  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:47:03.748479  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-724168 stop --alsologtostderr -v 5: (43.921606841s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 start --wait true --alsologtostderr -v 5
E1124 02:47:08.870288  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:47:19.112685  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:47:39.594293  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:47:40.245082  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-724168 start --wait true --alsologtostderr -v 5: (53.705701936s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (97.76s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-724168 node delete m03 --alsologtostderr -v 5: (9.67871385s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.47s)
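The go-template passed to `kubectl get nodes` above walks each node's conditions and prints the Ready status. A sketch that evaluates the same template against stub data, to show what the test asserts on:

// ready_template.go - evaluates the test's go-template against stub node data.
package main

import (
	"os"
	"text/template"
)

func main() {
	const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	// Stub shaped like `kubectl get nodes -o json`, reduced to what the template touches.
	data := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
				{"type": "MemoryPressure", "status": "False"},
			}}},
		},
	}
	t := template.Must(template.New("ready").Parse(tpl))
	_ = t.Execute(os.Stdout, data) // prints " True"
}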

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (47.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 stop --alsologtostderr -v 5
E1124 02:48:20.556493  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-724168 stop --alsologtostderr -v 5: (47.52544852s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-724168 status --alsologtostderr -v 5: exit status 7 (117.183252ms)

                                                
                                                
-- stdout --
	ha-724168
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-724168-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-724168-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 02:48:57.817026  427963 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:48:57.817300  427963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:48:57.817309  427963 out.go:374] Setting ErrFile to fd 2...
	I1124 02:48:57.817313  427963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:48:57.817482  427963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:48:57.817670  427963 out.go:368] Setting JSON to false
	I1124 02:48:57.817700  427963 mustload.go:66] Loading cluster: ha-724168
	I1124 02:48:57.817817  427963 notify.go:221] Checking for updates...
	I1124 02:48:57.818047  427963 config.go:182] Loaded profile config "ha-724168": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:48:57.818067  427963 status.go:174] checking status of ha-724168 ...
	I1124 02:48:57.818500  427963 cli_runner.go:164] Run: docker container inspect ha-724168 --format={{.State.Status}}
	I1124 02:48:57.838516  427963 status.go:371] ha-724168 host status = "Stopped" (err=<nil>)
	I1124 02:48:57.838534  427963 status.go:384] host is not running, skipping remaining checks
	I1124 02:48:57.838594  427963 status.go:176] ha-724168 status: &{Name:ha-724168 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:48:57.838623  427963 status.go:174] checking status of ha-724168-m02 ...
	I1124 02:48:57.838832  427963 cli_runner.go:164] Run: docker container inspect ha-724168-m02 --format={{.State.Status}}
	I1124 02:48:57.855316  427963 status.go:371] ha-724168-m02 host status = "Stopped" (err=<nil>)
	I1124 02:48:57.855331  427963 status.go:384] host is not running, skipping remaining checks
	I1124 02:48:57.855337  427963 status.go:176] ha-724168-m02 status: &{Name:ha-724168-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:48:57.855352  427963 status.go:174] checking status of ha-724168-m04 ...
	I1124 02:48:57.855561  427963 cli_runner.go:164] Run: docker container inspect ha-724168-m04 --format={{.State.Status}}
	I1124 02:48:57.872217  427963 status.go:371] ha-724168-m04 host status = "Stopped" (err=<nil>)
	I1124 02:48:57.872253  427963 status.go:384] host is not running, skipping remaining checks
	I1124 02:48:57.872263  427963 status.go:176] ha-724168-m04 status: &{Name:ha-724168-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (47.64s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (55.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1124 02:49:42.478103  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-724168 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (55.134309248s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.91s)
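
The quoting in the last command obscures the readiness check; unrolled, the go-template walks every node's conditions and prints only the Ready status, one per line:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # expect one " True" per node once the restarted cluster settles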

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.68s)

TestMultiControlPlane/serial/AddSecondaryNode (84.4s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 node add --control-plane --alsologtostderr -v 5
E1124 02:51:17.179430  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-724168 node add --control-plane --alsologtostderr -v 5: (1m23.526716654s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-724168 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.40s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

TestJSONOutput/start/Command (40.38s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-585498 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1124 02:51:58.616813  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-585498 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.380163096s)
--- PASS: TestJSONOutput/start/Command (40.38s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-585498 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-585498 --output=json --user=testUser: (6.120446593s)
--- PASS: TestJSONOutput/stop/Command (6.12s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-809635 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-809635 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.763978ms)

-- stdout --
	{"specversion":"1.0","id":"164f575d-a0e7-4821-8bcb-88ff72caa89a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-809635] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b062238f-354f-41ca-b4b7-b3e20359f393","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21975"}}
	{"specversion":"1.0","id":"e6aae047-5a95-4bd3-beab-d2b558a24408","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c9aa2fba-7fbe-4800-840c-4ac6c094e1a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig"}}
	{"specversion":"1.0","id":"406c5d1f-f1ba-47c7-b14d-729c0c8f755e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube"}}
	{"specversion":"1.0","id":"46240c16-d151-4e22-9a6f-6d59ad6d16d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b2dce779-4f41-41ab-b64f-468e235553f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"70e4b19e-608d-4949-8368-3f4a6b819603","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-809635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-809635
--- PASS: TestErrorJSONOutput (0.22s)
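
Every stdout line above is a CloudEvents-style JSON object; the final one (type io.k8s.sigs.minikube.error) carries the exit code, the DRV_UNSUPPORTED_OS name, and the human-readable message. A sketch of pulling just the error message out of such a run with jq (the jq step is not part of the test itself):

    out/minikube-linux-amd64 start -p json-output-error-809635 --output=json --driver=fail 2>/dev/null \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # The driver 'fail' is not supported on linux/amd64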

TestKicCustomNetwork/create_custom_network (28.48s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-271441 --network=
E1124 02:52:26.320096  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-271441 --network=: (26.375739451s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-271441" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-271441
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-271441: (2.089695953s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.48s)

TestKicCustomNetwork/use_default_bridge_network (24.96s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-272488 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-272488 --network=bridge: (22.973050281s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-272488" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-272488
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-272488: (1.966368052s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (24.96s)

TestKicExistingNetwork (25.34s)

=== RUN   TestKicExistingNetwork
I1124 02:53:15.756427  349078 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1124 02:53:15.773021  349078 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1124 02:53:15.773083  349078 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1124 02:53:15.773098  349078 cli_runner.go:164] Run: docker network inspect existing-network
W1124 02:53:15.788156  349078 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1124 02:53:15.788184  349078 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1124 02:53:15.788197  349078 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1124 02:53:15.788334  349078 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1124 02:53:15.805045  349078 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-58688175ab6d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:ae:9d:6a:fc:5c:13} reservation:<nil>}
I1124 02:53:15.805539  349078 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00187cf60}
I1124 02:53:15.805574  349078 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1124 02:53:15.805623  349078 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1124 02:53:15.853967  349078 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-991003 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-991003 --network=existing-network: (23.244834245s)
helpers_test.go:175: Cleaning up "existing-network-991003" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-991003
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-991003: (1.964458324s)
I1124 02:53:41.079923  349078 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (25.34s)
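
The setup log shows the interesting part: minikube skips 192.168.49.0/24 because an existing bridge already owns it, picks 192.168.58.0/24 as the next free private /24, and the test pre-creates the network with plain docker exactly as minikube would (command copied from the log):

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=existing-network \
      existing-network

Starting with --network=existing-network then attaches the node container to that network instead of creating a new one; the final docker network ls filters by label to confirm cleanup behaviour.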

TestKicCustomSubnet (24.47s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-040011 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-040011 --subnet=192.168.60.0/24: (22.351777449s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-040011 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-040011" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-040011
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-040011: (2.10238316s)
--- PASS: TestKicCustomSubnet (24.47s)
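
To confirm the requested subnet was honoured, the test reads it back out of docker's IPAM config; the same one-liner works against any network:

    docker network inspect custom-subnet-040011 --format '{{(index .IPAM.Config 0).Subnet}}'
    # expected here: 192.168.60.0/24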

TestKicStaticIP (25.46s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-403039 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-403039 --static-ip=192.168.200.200: (23.237038603s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-403039 ip
helpers_test.go:175: Cleaning up "static-ip-403039" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-403039
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-403039: (2.075719776s)
--- PASS: TestKicStaticIP (25.46s)
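
A minimal sketch of the assertion behind this test, reusing the profile from this run:

    ip="$(out/minikube-linux-amd64 -p static-ip-403039 ip)"
    [ "$ip" = "192.168.200.200" ] && echo "static IP honoured"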

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (50.76s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-976278 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-976278 --driver=docker  --container-runtime=crio: (22.494722909s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-978565 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-978565 --driver=docker  --container-runtime=crio: (22.466439704s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-976278
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-978565
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-978565" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-978565
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-978565: (2.291846265s)
helpers_test.go:175: Cleaning up "first-976278" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-976278
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-976278: (2.301430184s)
--- PASS: TestMinikubeProfile (50.76s)
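
profile list -ojson is what the test parses after each profile switch. Assuming the usual shape of minikube's JSON output (profiles grouped under "valid" and "invalid" arrays; verify against your version), the names can be pulled with jq:

    out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'
    # first-976278
    # second-978565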

TestMountStart/serial/StartWithMountFirst (4.78s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-692349 --memory=3072 --mount-string /tmp/TestMountStartserial2659868968/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-692349 --memory=3072 --mount-string /tmp/TestMountStartserial2659868968/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.779903345s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.78s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-692349 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)
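
The long start invocation boils down to --mount-string HOST:GUEST plus knobs for the 9p share minikube creates (--mount-port, --mount-uid/--mount-gid, --mount-msize); --no-kubernetes keeps the node free of Kubernetes so only the mount is under test. Verification is then a plain ls over ssh:

    out/minikube-linux-amd64 -p mount-start-1-692349 ssh -- ls /minikube-host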

TestMountStart/serial/StartWithMountSecond (7.7s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-706532 --memory=3072 --mount-string /tmp/TestMountStartserial2659868968/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-706532 --memory=3072 --mount-string /tmp/TestMountStartserial2659868968/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.698112392s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.70s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-706532 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.64s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-692349 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-692349 --alsologtostderr -v=5: (1.637532999s)
--- PASS: TestMountStart/serial/DeleteFirst (1.64s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-706532 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-706532
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-706532: (1.250459573s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (7.17s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-706532
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-706532: (6.171405128s)
--- PASS: TestMountStart/serial/RestartStopped (7.17s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-706532 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (96.12s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-951968 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1124 02:56:17.179814  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:56:58.616715  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-951968 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m35.654034481s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.12s)

TestMultiNode/serial/DeployApp2Nodes (2.84s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-951968 -- rollout status deployment/busybox: (1.481839047s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- exec busybox-7b57f96db7-76bj4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- exec busybox-7b57f96db7-kmd4x -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- exec busybox-7b57f96db7-76bj4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- exec busybox-7b57f96db7-kmd4x -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- exec busybox-7b57f96db7-76bj4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- exec busybox-7b57f96db7-kmd4x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (2.84s)
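
The exec block above runs the same three lookups (kubernetes.io, kubernetes.default, kubernetes.default.svc.cluster.local) in a busybox pod on each node, proving cross-node DNS works. Condensed into a loop (the app=busybox selector is an assumption; check testdata/multinodes/multinode-pod-dns-test.yaml for the real labels):

    for pod in $(kubectl get pods -l app=busybox -o jsonpath='{.items[*].metadata.name}'); do
      kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done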

TestMultiNode/serial/PingHostFrom2Pods (0.69s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- exec busybox-7b57f96db7-76bj4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- exec busybox-7b57f96db7-76bj4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- exec busybox-7b57f96db7-kmd4x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-951968 -- exec busybox-7b57f96db7-kmd4x -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)
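
The shell pipeline in each exec deserves a gloss: assuming busybox nslookup's fixed output layout, awk 'NR==5' keeps the fifth line (the Address line of the answer) and cut -d' ' -f3 takes the IP after the double space, so each pod pings the host gateway it just resolved:

    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
    # 192.168.67.1 in this cluster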

TestMultiNode/serial/AddNode (26.3s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-951968 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-951968 -v=5 --alsologtostderr: (25.665744844s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (26.30s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-951968 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.64s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

TestMultiNode/serial/CopyFile (9.59s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 cp testdata/cp-test.txt multinode-951968:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 cp multinode-951968:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4163872741/001/cp-test_multinode-951968.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 cp multinode-951968:/home/docker/cp-test.txt multinode-951968-m02:/home/docker/cp-test_multinode-951968_multinode-951968-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968-m02 "sudo cat /home/docker/cp-test_multinode-951968_multinode-951968-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 cp multinode-951968:/home/docker/cp-test.txt multinode-951968-m03:/home/docker/cp-test_multinode-951968_multinode-951968-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968-m03 "sudo cat /home/docker/cp-test_multinode-951968_multinode-951968-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 cp testdata/cp-test.txt multinode-951968-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 cp multinode-951968-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4163872741/001/cp-test_multinode-951968-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 cp multinode-951968-m02:/home/docker/cp-test.txt multinode-951968:/home/docker/cp-test_multinode-951968-m02_multinode-951968.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968 "sudo cat /home/docker/cp-test_multinode-951968-m02_multinode-951968.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 cp multinode-951968-m02:/home/docker/cp-test.txt multinode-951968-m03:/home/docker/cp-test_multinode-951968-m02_multinode-951968-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968-m03 "sudo cat /home/docker/cp-test_multinode-951968-m02_multinode-951968-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 cp testdata/cp-test.txt multinode-951968-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 cp multinode-951968-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile4163872741/001/cp-test_multinode-951968-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 cp multinode-951968-m03:/home/docker/cp-test.txt multinode-951968:/home/docker/cp-test_multinode-951968-m03_multinode-951968.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968 "sudo cat /home/docker/cp-test_multinode-951968-m03_multinode-951968.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 cp multinode-951968-m03:/home/docker/cp-test.txt multinode-951968-m02:/home/docker/cp-test_multinode-951968-m03_multinode-951968-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 ssh -n multinode-951968-m02 "sudo cat /home/docker/cp-test_multinode-951968-m03_multinode-951968-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.59s)
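
minikube cp accepts a host path or a <node>:<path> pair on either side, which is why the test walks host-to-node, node-to-host, and node-to-node for every node, verifying each leg with sudo cat over ssh. The three shapes, condensed (the /tmp destination is illustrative):

    out/minikube-linux-amd64 -p multinode-951968 cp testdata/cp-test.txt multinode-951968:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-951968 cp multinode-951968:/home/docker/cp-test.txt /tmp/cp-test.txt
    out/minikube-linux-amd64 -p multinode-951968 cp multinode-951968:/home/docker/cp-test.txt multinode-951968-m02:/home/docker/cp-test.txt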

TestMultiNode/serial/StopNode (2.23s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-951968 node stop m03: (1.265169342s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-951968 status: exit status 7 (481.682803ms)

-- stdout --
	multinode-951968
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-951968-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-951968-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-951968 status --alsologtostderr: exit status 7 (480.294385ms)

-- stdout --
	multinode-951968
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-951968-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-951968-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 02:58:05.290242  487969 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:58:05.290507  487969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:58:05.290518  487969 out.go:374] Setting ErrFile to fd 2...
	I1124 02:58:05.290525  487969 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:58:05.290728  487969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:58:05.290921  487969 out.go:368] Setting JSON to false
	I1124 02:58:05.290957  487969 mustload.go:66] Loading cluster: multinode-951968
	I1124 02:58:05.291076  487969 notify.go:221] Checking for updates...
	I1124 02:58:05.291412  487969 config.go:182] Loaded profile config "multinode-951968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:58:05.291430  487969 status.go:174] checking status of multinode-951968 ...
	I1124 02:58:05.291833  487969 cli_runner.go:164] Run: docker container inspect multinode-951968 --format={{.State.Status}}
	I1124 02:58:05.310740  487969 status.go:371] multinode-951968 host status = "Running" (err=<nil>)
	I1124 02:58:05.310770  487969 host.go:66] Checking if "multinode-951968" exists ...
	I1124 02:58:05.311052  487969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951968
	I1124 02:58:05.328874  487969 host.go:66] Checking if "multinode-951968" exists ...
	I1124 02:58:05.329118  487969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:58:05.329194  487969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951968
	I1124 02:58:05.345755  487969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/multinode-951968/id_rsa Username:docker}
	I1124 02:58:05.440720  487969 ssh_runner.go:195] Run: systemctl --version
	I1124 02:58:05.446859  487969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:58:05.458532  487969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:58:05.511795  487969 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-24 02:58:05.501750251 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 02:58:05.512336  487969 kubeconfig.go:125] found "multinode-951968" server: "https://192.168.67.2:8443"
	I1124 02:58:05.512367  487969 api_server.go:166] Checking apiserver status ...
	I1124 02:58:05.512400  487969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 02:58:05.523733  487969 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup
	W1124 02:58:05.531468  487969 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 02:58:05.531513  487969 ssh_runner.go:195] Run: ls
	I1124 02:58:05.534867  487969 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1124 02:58:05.538813  487969 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1124 02:58:05.538832  487969 status.go:463] multinode-951968 apiserver status = Running (err=<nil>)
	I1124 02:58:05.538840  487969 status.go:176] multinode-951968 status: &{Name:multinode-951968 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:58:05.538854  487969 status.go:174] checking status of multinode-951968-m02 ...
	I1124 02:58:05.539107  487969 cli_runner.go:164] Run: docker container inspect multinode-951968-m02 --format={{.State.Status}}
	I1124 02:58:05.556648  487969 status.go:371] multinode-951968-m02 host status = "Running" (err=<nil>)
	I1124 02:58:05.556666  487969 host.go:66] Checking if "multinode-951968-m02" exists ...
	I1124 02:58:05.556881  487969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-951968-m02
	I1124 02:58:05.573078  487969 host.go:66] Checking if "multinode-951968-m02" exists ...
	I1124 02:58:05.573309  487969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 02:58:05.573355  487969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-951968-m02
	I1124 02:58:05.589112  487969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21975-345525/.minikube/machines/multinode-951968-m02/id_rsa Username:docker}
	I1124 02:58:05.683387  487969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 02:58:05.695083  487969 status.go:176] multinode-951968-m02 status: &{Name:multinode-951968-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:58:05.695109  487969 status.go:174] checking status of multinode-951968-m03 ...
	I1124 02:58:05.695334  487969 cli_runner.go:164] Run: docker container inspect multinode-951968-m03 --format={{.State.Status}}
	I1124 02:58:05.712115  487969 status.go:371] multinode-951968-m03 host status = "Stopped" (err=<nil>)
	I1124 02:58:05.712132  487969 status.go:384] host is not running, skipping remaining checks
	I1124 02:58:05.712138  487969 status.go:176] multinode-951968-m03 status: &{Name:multinode-951968-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.23s)
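
status propagates node health through its exit code: 0 when everything is running, and exit status 7 in both invocations above while m03 is stopped. That makes it usable as a scriptable probe:

    out/minikube-linux-amd64 -p multinode-951968 status >/dev/null
    rc=$?
    [ "$rc" -ne 0 ] && echo "cluster degraded (status exit $rc)"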

TestMultiNode/serial/StartAfterStop (7.01s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-951968 node start m03 -v=5 --alsologtostderr: (6.318987335s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.01s)

TestMultiNode/serial/RestartKeepsNodes (57.93s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-951968
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-951968
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-951968: (31.249175411s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-951968 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-951968 --wait=true -v=5 --alsologtostderr: (26.55944881s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-951968
--- PASS: TestMultiNode/serial/RestartKeepsNodes (57.93s)

TestMultiNode/serial/DeleteNode (4.94s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-951968 node delete m03: (4.361997159s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (4.94s)

TestMultiNode/serial/StopMultiNode (17.59s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-951968 stop: (17.395274912s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-951968 status: exit status 7 (95.216776ms)

-- stdout --
	multinode-951968
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-951968-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-951968 status --alsologtostderr: exit status 7 (96.527529ms)

-- stdout --
	multinode-951968
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-951968-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 02:59:33.136880  496908 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:59:33.137021  496908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:59:33.137031  496908 out.go:374] Setting ErrFile to fd 2...
	I1124 02:59:33.137036  496908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:59:33.137241  496908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 02:59:33.137385  496908 out.go:368] Setting JSON to false
	I1124 02:59:33.137412  496908 mustload.go:66] Loading cluster: multinode-951968
	I1124 02:59:33.137537  496908 notify.go:221] Checking for updates...
	I1124 02:59:33.137735  496908 config.go:182] Loaded profile config "multinode-951968": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 02:59:33.137755  496908 status.go:174] checking status of multinode-951968 ...
	I1124 02:59:33.138208  496908 cli_runner.go:164] Run: docker container inspect multinode-951968 --format={{.State.Status}}
	I1124 02:59:33.158041  496908 status.go:371] multinode-951968 host status = "Stopped" (err=<nil>)
	I1124 02:59:33.158072  496908 status.go:384] host is not running, skipping remaining checks
	I1124 02:59:33.158080  496908 status.go:176] multinode-951968 status: &{Name:multinode-951968 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 02:59:33.158127  496908 status.go:174] checking status of multinode-951968-m02 ...
	I1124 02:59:33.158396  496908 cli_runner.go:164] Run: docker container inspect multinode-951968-m02 --format={{.State.Status}}
	I1124 02:59:33.175502  496908 status.go:371] multinode-951968-m02 host status = "Stopped" (err=<nil>)
	I1124 02:59:33.175519  496908 status.go:384] host is not running, skipping remaining checks
	I1124 02:59:33.175525  496908 status.go:176] multinode-951968-m02 status: &{Name:multinode-951968-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (17.59s)
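
Note: the status.go:176 lines above print minikube's per-node status struct. Its shape, reconstructed from the log output alone (field names read off the &{...} dump; the actual type in minikube's source may differ):

// Inferred from the logged &{Name:... Host:Stopped ...} values; illustrative.
type Status struct {
	Name       string // profile or machine name, e.g. "multinode-951968-m02"
	Host       string // container state: "Running", "Stopped", ...
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool // true for the -m02 worker, false for the control plane
	TimeToStop string
	DockerEnv  string
	PodManEnv  string
}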

TestMultiNode/serial/RestartMultiNode (41.81s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-951968 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-951968 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (41.225489161s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-951968 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (41.81s)

TestMultiNode/serial/ValidateNameConflict (23.41s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-951968
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-951968-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-951968-m02 --driver=docker  --container-runtime=crio: exit status 14 (74.254325ms)

-- stdout --
	* [multinode-951968-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-951968-m02' is duplicated with machine name 'multinode-951968-m02' in profile 'multinode-951968'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-951968-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-951968-m03 --driver=docker  --container-runtime=crio: (20.700886896s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-951968
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-951968: exit status 80 (290.607168ms)

-- stdout --
	* Adding node m03 to cluster multinode-951968 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-951968-m03 already exists in multinode-951968-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-951968-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-951968-m03: (2.287488145s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.41s)
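
Note: both failures above reduce to one rule: a new profile name may collide neither with an existing profile nor with a machine name inside a multi-node profile ("multinode-951968-m02" is the second node of "multinode-951968", hence exit status 14). A hypothetical sketch of that uniqueness check (not minikube's actual validation code):

// profiles maps profile name -> machine names, e.g.
// {"multinode-951968": {"multinode-951968", "multinode-951968-m02"}}.
func nameConflicts(candidate string, profiles map[string][]string) bool {
	for profile, machines := range profiles {
		if candidate == profile {
			return true
		}
		for _, machine := range machines {
			if candidate == machine {
				return true
			}
		}
	}
	return false
}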

TestPreload (84.72s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-122723 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1124 03:01:17.179450  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-122723 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (46.502737238s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-122723 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-122723
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-122723: (5.816773228s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-122723 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1124 03:01:58.616579  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-122723 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (28.931283919s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-122723 image list
helpers_test.go:175: Cleaning up "test-preload-122723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-122723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-122723: (2.350688329s)
--- PASS: TestPreload (84.72s)

TestScheduledStopUnix (99.9s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-029934 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-029934 --memory=3072 --driver=docker  --container-runtime=crio: (23.267316463s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-029934 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 03:02:30.576220  513859 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:02:30.576316  513859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:02:30.576324  513859 out.go:374] Setting ErrFile to fd 2...
	I1124 03:02:30.576328  513859 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:02:30.576577  513859 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:02:30.576816  513859 out.go:368] Setting JSON to false
	I1124 03:02:30.576919  513859 mustload.go:66] Loading cluster: scheduled-stop-029934
	I1124 03:02:30.577245  513859 config.go:182] Loaded profile config "scheduled-stop-029934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:02:30.577326  513859 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/config.json ...
	I1124 03:02:30.577516  513859 mustload.go:66] Loading cluster: scheduled-stop-029934
	I1124 03:02:30.577652  513859 config.go:182] Loaded profile config "scheduled-stop-029934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-029934 -n scheduled-stop-029934
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-029934 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 03:02:30.967047  514008 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:02:30.967309  514008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:02:30.967321  514008 out.go:374] Setting ErrFile to fd 2...
	I1124 03:02:30.967325  514008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:02:30.967578  514008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:02:30.967853  514008 out.go:368] Setting JSON to false
	I1124 03:02:30.968059  514008 daemonize_unix.go:73] killing process 513893 as it is an old scheduled stop
	I1124 03:02:30.968175  514008 mustload.go:66] Loading cluster: scheduled-stop-029934
	I1124 03:02:30.968516  514008 config.go:182] Loaded profile config "scheduled-stop-029934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:02:30.968581  514008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/config.json ...
	I1124 03:02:30.968761  514008 mustload.go:66] Loading cluster: scheduled-stop-029934
	I1124 03:02:30.968878  514008 config.go:182] Loaded profile config "scheduled-stop-029934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 03:02:30.973506  349078 retry.go:31] will retry after 67.033µs: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:30.974690  349078 retry.go:31] will retry after 107.383µs: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:30.975830  349078 retry.go:31] will retry after 290.057µs: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:30.976979  349078 retry.go:31] will retry after 364.923µs: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:30.978117  349078 retry.go:31] will retry after 325.565µs: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:30.979267  349078 retry.go:31] will retry after 1.058863ms: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:30.980420  349078 retry.go:31] will retry after 957.771µs: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:30.981548  349078 retry.go:31] will retry after 1.208529ms: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:30.983788  349078 retry.go:31] will retry after 3.196449ms: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:30.987991  349078 retry.go:31] will retry after 1.957083ms: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:30.990180  349078 retry.go:31] will retry after 7.669535ms: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:30.998384  349078 retry.go:31] will retry after 5.807315ms: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:31.004589  349078 retry.go:31] will retry after 14.898147ms: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:31.019794  349078 retry.go:31] will retry after 9.767787ms: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:31.030000  349078 retry.go:31] will retry after 17.187883ms: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
I1124 03:02:31.048221  349078 retry.go:31] will retry after 40.816072ms: open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-029934 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-029934 -n scheduled-stop-029934
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-029934
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-029934 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1124 03:02:56.826261  514667 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:02:56.826369  514667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:02:56.826376  514667 out.go:374] Setting ErrFile to fd 2...
	I1124 03:02:56.826382  514667 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:02:56.826603  514667 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:02:56.826895  514667 out.go:368] Setting JSON to false
	I1124 03:02:56.827003  514667 mustload.go:66] Loading cluster: scheduled-stop-029934
	I1124 03:02:56.827340  514667 config.go:182] Loaded profile config "scheduled-stop-029934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:02:56.827452  514667 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/scheduled-stop-029934/config.json ...
	I1124 03:02:56.827658  514667 mustload.go:66] Loading cluster: scheduled-stop-029934
	I1124 03:02:56.827794  514667 config.go:182] Loaded profile config "scheduled-stop-029934": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1124 03:03:21.683696  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-029934
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-029934: exit status 7 (81.166586ms)

-- stdout --
	scheduled-stop-029934
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-029934 -n scheduled-stop-029934
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-029934 -n scheduled-stop-029934: exit status 7 (76.440266ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-029934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-029934
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-029934: (5.150916545s)
--- PASS: TestScheduledStopUnix (99.90s)
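
Note: the retry.go:31 lines above poll for the scheduled-stop pid file with waits that roughly double between attempts. A rough standalone sketch of such a polling loop (the path, durations, and attempt count are assumptions, not minikube's retry package):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForFile polls until path exists, doubling the wait each attempt.
func waitForFile(path string, attempts int) error {
	wait := 50 * time.Microsecond
	for i := 0; i < attempts; i++ {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(wait)
		wait *= 2
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	// Hypothetical path; the test above polls a per-profile pid file.
	if err := waitForFile("/tmp/scheduled-stop.pid", 16); err != nil {
		fmt.Println(err)
	}
}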

TestInsufficientStorage (9.39s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-628185 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-628185 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (6.973406747s)

-- stdout --
	{"specversion":"1.0","id":"04ce7cbf-12da-4b54-8e3d-eaf8081fc73c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-628185] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"32925f61-9c08-467a-a9b5-944158e92272","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21975"}}
	{"specversion":"1.0","id":"a3494b43-3854-44b3-8f14-38c48e97867a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"8eb9284d-bcee-4260-aec6-4b24572bef17","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig"}}
	{"specversion":"1.0","id":"3c9ff86b-1c39-410f-949a-5ee0b3f55307","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube"}}
	{"specversion":"1.0","id":"7c25eea6-df0d-4b93-b4f5-528b617f16ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"d7c89947-79d9-4caf-8aee-3b046dd1b12d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4837cb2e-4bda-429d-9bdd-17b7cd59ba08","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3749edeb-e511-4d58-b37f-f8fee3070e69","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"3255561b-0fd0-415f-a9ce-50cd4cbb7269","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"359ccff3-a7a2-41fb-a98b-a8f682d19136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"4727cd4c-050a-4024-a33d-9cab0e7c05f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-628185\" primary control-plane node in \"insufficient-storage-628185\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ff2bbc38-e25f-4b18-a04c-b408fdc5c72d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763935653-21975 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2aeb63ae-35a8-4560-8a55-cfc2514a68f4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"170f6bfa-782b-45a9-843a-a2bcddb557bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-628185 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-628185 --output=json --layout=cluster: exit status 7 (283.412581ms)

-- stdout --
	{"Name":"insufficient-storage-628185","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-628185","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1124 03:03:54.390156  517206 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-628185" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-628185 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-628185 --output=json --layout=cluster: exit status 7 (278.00298ms)

-- stdout --
	{"Name":"insufficient-storage-628185","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-628185","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1124 03:03:54.669186  517333 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-628185" does not appear in /home/jenkins/minikube-integration/21975-345525/kubeconfig
	E1124 03:03:54.679230  517333 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/insufficient-storage-628185/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-628185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-628185
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-628185: (1.858730659s)
--- PASS: TestInsufficientStorage (9.39s)
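
Note: each --output=json line above is a CloudEvents envelope; the out-of-disk failure is simply an event of type "io.k8s.sigs.minikube.error" carrying "exitcode":"26". A small sketch of consuming such a stream (struct fields taken from the JSON shown above; this is not an official minikube client API):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent keeps only the fields used here; all data values above are strings.
type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | thisprogram
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // tolerate non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error (exitcode %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}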

TestRunningBinaryUpgrade (43.48s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3348351347 start -p running-upgrade-718208 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1124 03:06:58.616598  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3348351347 start -p running-upgrade-718208 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.873360443s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-718208 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-718208 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (20.127126746s)
helpers_test.go:175: Cleaning up "running-upgrade-718208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-718208
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-718208: (1.955674074s)
--- PASS: TestRunningBinaryUpgrade (43.48s)

TestKubernetesUpgrade (309.74s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-034173 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-034173 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.735481563s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-034173
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-034173: (1.31554216s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-034173 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-034173 status --format={{.Host}}: exit status 7 (92.43107ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-034173 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1124 03:06:17.180344  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-034173 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.33068836s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-034173 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-034173 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-034173 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (85.093489ms)

-- stdout --
	* [kubernetes-upgrade-034173] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-034173
	    minikube start -p kubernetes-upgrade-034173 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0341732 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-034173 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-034173 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-034173 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.296377192s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-034173" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-034173
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-034173: (2.807189435s)
--- PASS: TestKubernetesUpgrade (309.74s)
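
Note: the exit-106 refusal above is at heart a version comparison: the requested v1.28.0 sorts below the running cluster's v1.34.1, and minikube refuses to downgrade in place. A sketch of that comparison using golang.org/x/mod/semver (how minikube itself implements the check may differ):

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	current, requested := "v1.34.1", "v1.28.0"
	if semver.Compare(requested, current) < 0 {
		// Mirrors the K8S_DOWNGRADE_UNSUPPORTED / exit status 106 path above.
		fmt.Printf("unable to safely downgrade existing Kubernetes %s cluster to %s\n", current, requested)
	}
}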

TestMissingContainerUpgrade (109.79s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3297924962 start -p missing-upgrade-033923 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3297924962 start -p missing-upgrade-033923 --memory=3072 --driver=docker  --container-runtime=crio: (1m9.780738782s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-033923
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-033923
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-033923 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-033923 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.289918516s)
helpers_test.go:175: Cleaning up "missing-upgrade-033923" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-033923
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-033923: (2.359912301s)
--- PASS: TestMissingContainerUpgrade (109.79s)

TestPause/serial/Start (47.94s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-530927 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-530927 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (47.934145953s)
--- PASS: TestPause/serial/Start (47.94s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565297 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-565297 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (99.158886ms)

-- stdout --
	* [NoKubernetes-565297] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

TestNoKubernetes/serial/StartWithK8s (29.66s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565297 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1124 03:04:20.246700  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-565297 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.293392006s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-565297 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.66s)

TestNoKubernetes/serial/StartWithStopK8s (18.66s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-565297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.203946225s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-565297 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-565297 status -o json: exit status 2 (377.583266ms)

-- stdout --
	{"Name":"NoKubernetes-565297","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-565297
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-565297: (2.07465204s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.66s)

TestPause/serial/SecondStartNoReconfiguration (8.19s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-530927 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-530927 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (8.179676069s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (8.19s)

TestNoKubernetes/serial/Start (7.52s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-565297 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.523191481s)
--- PASS: TestNoKubernetes/serial/Start (7.52s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21975-345525/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-565297 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-565297 "sudo systemctl is-active --quiet service kubelet": exit status 1 (312.436108ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

TestNoKubernetes/serial/ProfileList (1.87s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.87s)

TestNoKubernetes/serial/Stop (1.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-565297
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-565297: (1.260530739s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestNoKubernetes/serial/StartNoArgs (6.91s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-565297 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-565297 --driver=docker  --container-runtime=crio: (6.907861811s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.91s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-565297 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-565297 "sudo systemctl is-active --quiet service kubelet": exit status 1 (355.406907ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestNetworkPlugins/group/false (5.76s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-965704 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-965704 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (195.193769ms)

-- stdout --
	* [false-965704] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1124 03:05:05.106039  542425 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:05:05.106170  542425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:05:05.106180  542425 out.go:374] Setting ErrFile to fd 2...
	I1124 03:05:05.106187  542425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:05:05.106493  542425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-345525/.minikube/bin
	I1124 03:05:05.107141  542425 out.go:368] Setting JSON to false
	I1124 03:05:05.108494  542425 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6452,"bootTime":1763947053,"procs":268,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 03:05:05.108569  542425 start.go:143] virtualization: kvm guest
	I1124 03:05:05.110659  542425 out.go:179] * [false-965704] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 03:05:05.111665  542425 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:05:05.111668  542425 notify.go:221] Checking for updates...
	I1124 03:05:05.114349  542425 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:05:05.115640  542425 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-345525/kubeconfig
	I1124 03:05:05.116670  542425 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-345525/.minikube
	I1124 03:05:05.117782  542425 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 03:05:05.118848  542425 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:05:05.120991  542425 config.go:182] Loaded profile config "cert-expiration-062725": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:05:05.121153  542425 config.go:182] Loaded profile config "force-systemd-flag-597158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 03:05:05.121297  542425 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:05:05.149663  542425 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1124 03:05:05.149794  542425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:05:05.217359  542425 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-24 03:05:05.205796227 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652072448 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 03:05:05.217498  542425 docker.go:319] overlay module found
	I1124 03:05:05.219339  542425 out.go:179] * Using the docker driver based on user configuration
	I1124 03:05:05.220334  542425 start.go:309] selected driver: docker
	I1124 03:05:05.220353  542425 start.go:927] validating driver "docker" against <nil>
	I1124 03:05:05.220364  542425 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:05:05.222139  542425 out.go:203] 
	W1124 03:05:05.223082  542425 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1124 03:05:05.224155  542425 out.go:203] 

** /stderr **
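
Note: the exit-14 MK_USAGE failure above comes from start-time validation: with --container-runtime=crio, passing --cni=false is rejected because cri-o relies on a CNI plugin for pod networking. A hypothetical form of that guard (minikube's real check lives in its start/validation code, not shown here):

package main

import (
	"errors"
	"fmt"
)

func validateCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return errors.New(`the "crio" container runtime requires CNI`)
	}
	return nil
}

func main() {
	if err := validateCNI("crio", "false"); err != nil {
		fmt.Println("X Exiting due to MK_USAGE:", err) // as logged above
	}
}
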
net_test.go:88: 
----------------------- debugLogs start: false-965704 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-965704

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-965704

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-965704

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-965704

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-965704

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-965704

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-965704

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-965704

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-965704

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-965704

>>> host: /etc/nsswitch.conf:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: /etc/hosts:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: /etc/resolv.conf:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-965704

>>> host: crictl pods:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: crictl containers:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> k8s: describe netcat deployment:
error: context "false-965704" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-965704" does not exist

>>> k8s: netcat logs:
error: context "false-965704" does not exist

>>> k8s: describe coredns deployment:
error: context "false-965704" does not exist

>>> k8s: describe coredns pods:
error: context "false-965704" does not exist

>>> k8s: coredns logs:
error: context "false-965704" does not exist

>>> k8s: describe api server pod(s):
error: context "false-965704" does not exist

>>> k8s: api server logs:
error: context "false-965704" does not exist

>>> host: /etc/cni:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: ip a s:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: ip r s:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: iptables-save:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: iptables table nat:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> k8s: describe kube-proxy daemon set:
error: context "false-965704" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-965704" does not exist

>>> k8s: kube-proxy logs:
error: context "false-965704" does not exist

>>> host: kubelet daemon status:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: kubelet daemon config:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> k8s: kubelet logs:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:04:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-062725
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:05:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: force-systemd-flag-597158
contexts:
- context:
    cluster: cert-expiration-062725
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:04:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-062725
  name: cert-expiration-062725
- context:
    cluster: force-systemd-flag-597158
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:05:07 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: force-systemd-flag-597158
  name: force-systemd-flag-597158
current-context: force-systemd-flag-597158
kind: Config
users:
- name: cert-expiration-062725
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/cert-expiration-062725/client.crt
    client-key: /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/cert-expiration-062725/client.key
- name: force-systemd-flag-597158
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/client.crt
    client-key: /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/force-systemd-flag-597158/client.key
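
Aside: the dump above shows two live profiles in one kubeconfig; the probed false-965704 context is absent, which is why every probe in this block failed. Either existing context can be targeted directly, for example:

  kubectl --context cert-expiration-062725 get nodes
  kubectl config use-context force-systemd-flag-597158   # make it the default context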

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-965704

>>> host: docker daemon status:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: docker daemon config:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: /etc/docker/daemon.json:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: docker system info:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: cri-docker daemon status:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: cri-docker daemon config:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: cri-dockerd version:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: containerd daemon status:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: containerd daemon config:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: /etc/containerd/config.toml:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: containerd config dump:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: crio daemon status:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: crio daemon config:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: /etc/crio:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

>>> host: crio config:
* Profile "false-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-965704"

----------------------- debugLogs end: false-965704 [took: 5.390803428s] --------------------------------
helpers_test.go:175: Cleaning up "false-965704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-965704
--- PASS: TestNetworkPlugins/group/false (5.76s)

TestStoppedBinaryUpgrade/Setup (0.57s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.57s)

TestStoppedBinaryUpgrade/Upgrade (99.93s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.849977698 start -p stopped-upgrade-030919 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.849977698 start -p stopped-upgrade-030919 --memory=3072 --vm-driver=docker  --container-runtime=crio: (1m14.2780103s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.849977698 -p stopped-upgrade-030919 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.849977698 -p stopped-upgrade-030919 stop: (11.844533954s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-030919 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-030919 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (13.810379551s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.93s)
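
Aside: condensed, the upgrade path exercised above is start-with-old, stop, restart-in-place-with-new against the same profile (commands taken from the log, logging flags trimmed):

  /tmp/minikube-v1.32.0.849977698 start -p stopped-upgrade-030919 --memory=3072 --vm-driver=docker --container-runtime=crio   # 1. provision with the old release binary
  /tmp/minikube-v1.32.0.849977698 -p stopped-upgrade-030919 stop                                                              # 2. stop it with that same binary
  out/minikube-linux-amd64 start -p stopped-upgrade-030919 --memory=3072 --driver=docker --container-runtime=crio             # 3. restart with the binary under test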

TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-030919
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.95s)

TestNetworkPlugins/group/auto/Start (41.36s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (41.35661024s)
--- PASS: TestNetworkPlugins/group/auto/Start (41.36s)

TestNetworkPlugins/group/kindnet/Start (40.58s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (40.578058008s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (40.58s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-965704 "pgrep -a kubelet"
I1124 03:07:46.628087  349078 config.go:182] Loaded profile config "auto-965704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (9.19s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-965704 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6jj4k" [15a26497-1da1-457c-b4c5-6bf908929b40] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6jj4k" [15a26497-1da1-457c-b4c5-6bf908929b40] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004045381s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.19s)
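
Aside: testdata/netcat-deployment.yaml itself is not reproduced in this report. A minimal stand-in consistent with the probes logged here (label app=netcat, a container named dnsutils, and a "netcat" service on 8080 for the nc checks below) might look like the following; the image and command are placeholders, not necessarily the test's actual choices:

kubectl --context auto-965704 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: netcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: netcat              # the label the test waits on
  template:
    metadata:
      labels:
        app: netcat
    spec:
      containers:
      - name: dnsutils         # container name matching the readiness lines above
        image: registry.k8s.io/e2e-test-images/agnhost:2.40    # placeholder test image
        command: ["/agnhost", "netexec", "--http-port=8080"]   # serves TCP on 8080
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: netcat                 # resolvable service name used by the HairPin probe
spec:
  selector:
    app: netcat
  ports:
  - port: 8080
    targetPort: 8080
EOF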

TestNetworkPlugins/group/auto/DNS (0.15s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-965704 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.13s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
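
Aside: the Localhost and HairPin probes differ only in the dial target. The hairpin case has the single netcat pod reach itself back through its own "netcat" service, which succeeds only when the CNI permits hairpin traffic; the commands, verbatim from the log:

  # loopback inside the pod; no CNI involvement
  kubectl --context auto-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # via the service name, so the packet leaves the pod and hairpins back to it
  kubectl --context auto-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"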

TestNetworkPlugins/group/calico/Start (54.5s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (54.496881209s)
--- PASS: TestNetworkPlugins/group/calico/Start (54.50s)

TestNetworkPlugins/group/custom-flannel/Start (51.41s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (51.411720982s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (51.41s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-tlfkb" [542d5d21-b019-4a3a-9f1c-d78ea9675c4d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004184934s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-965704 "pgrep -a kubelet"
I1124 03:08:25.706225  349078 config.go:182] Loaded profile config "kindnet-965704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.36s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-965704 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wgnj7" [d3fe4c79-0545-4e21-9864-a9c4b39e4032] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wgnj7" [d3fe4c79-0545-4e21-9864-a9c4b39e4032] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004245747s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.23s)

TestNetworkPlugins/group/kindnet/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-965704 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

TestNetworkPlugins/group/kindnet/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-fxjkg" [71fe7f2d-c5b9-4fae-b3f9-32895fc519f4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004447487s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/Start (63.31s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m3.309494438s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (63.31s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-965704 "pgrep -a kubelet"
I1124 03:08:57.158700  349078 config.go:182] Loaded profile config "calico-965704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (11.17s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-965704 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jvr9p" [a04321cc-968c-4118-8b4b-6a4fb16d7145] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jvr9p" [a04321cc-968c-4118-8b4b-6a4fb16d7145] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004750336s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.17s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-965704 "pgrep -a kubelet"
I1124 03:09:07.267052  349078 config.go:182] Loaded profile config "custom-flannel-965704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.31s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-965704 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cw7dx" [e46604ab-bd01-4bd5-8c3f-cff9d624ebeb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cw7dx" [e46604ab-bd01-4bd5-8c3f-cff9d624ebeb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.003348782s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

TestNetworkPlugins/group/calico/DNS (0.16s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-965704 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

TestNetworkPlugins/group/calico/Localhost (0.12s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-965704 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (48.11s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (48.113337976s)
--- PASS: TestNetworkPlugins/group/flannel/Start (48.11s)

TestNetworkPlugins/group/bridge/Start (36.28s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-965704 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (36.280949888s)
--- PASS: TestNetworkPlugins/group/bridge/Start (36.28s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-965704 "pgrep -a kubelet"
I1124 03:09:59.061209  349078 config.go:182] Loaded profile config "enable-default-cni-965704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.21s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-965704 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d8h6c" [5dd97754-0577-49e3-b054-0eca319f008c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d8h6c" [5dd97754-0577-49e3-b054-0eca319f008c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.00343472s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.21s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-965704 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.12s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-965704 "pgrep -a kubelet"
I1124 03:10:13.413466  349078 config.go:182] Loaded profile config "bridge-965704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (8.19s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-965704 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9p8m9" [4f05cbf3-1ba5-4555-b466-84736b820e34] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9p8m9" [4f05cbf3-1ba5-4555-b466-84736b820e34] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004390135s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.19s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-pbcq8" [f69c310d-26d4-4142-9373-1cd53b9a9159] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003916732s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/bridge/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-965704 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.11s)

TestNetworkPlugins/group/bridge/Localhost (0.08s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

TestNetworkPlugins/group/bridge/HairPin (0.09s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.09s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-965704 "pgrep -a kubelet"
I1124 03:10:23.575048  349078 config.go:182] Loaded profile config "flannel-965704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (9.18s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-965704 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-q4892" [5bf3aa23-5e47-4a85-8e42-0110bb3ad1cb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-q4892" [5bf3aa23-5e47-4a85-8e42-0110bb3ad1cb] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.002822222s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.18s)

TestStartStop/group/old-k8s-version/serial/FirstStart (53.76s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (53.756365782s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (53.76s)

TestNetworkPlugins/group/flannel/DNS (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-965704 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.11s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-965704 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (55.73s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (55.725665905s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.73s)
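
Aside: --preload=false makes this profile pull its Kubernetes images at start time instead of unpacking the preloaded image tarball. One way to inspect what actually landed in the runtime (a sketch, not part of the test):

  out/minikube-linux-amd64 -p no-preload-603010 image ls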

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.37s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.369361445s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (48.37s)
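
Aside: --apiserver-port=8444 moves the API server off the usual 8443. A hedged way to confirm the port minikube wrote into the kubeconfig:

  kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-993813")].cluster.server}'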

TestStartStop/group/newest-cni/serial/FirstStart (31.7s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 03:11:17.180186  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/addons-831846/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (31.699411803s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (31.70s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-579951 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b61ae335-3755-4f88-9305-030d7d7fd2e7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b61ae335-3755-4f88-9305-030d7d7fd2e7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.00394628s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-579951 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (18.02s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-438041 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-438041 --alsologtostderr -v=3: (18.024764932s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (18.02s)

TestStartStop/group/old-k8s-version/serial/Stop (15.99s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-579951 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-579951 --alsologtostderr -v=3: (15.987569933s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.99s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-993813 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3399a559-d753-41f3-86bb-203b96faca7f] Pending
helpers_test.go:352: "busybox" [3399a559-d753-41f3-86bb-203b96faca7f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3399a559-d753-41f3-86bb-203b96faca7f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 7.00265401s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-993813 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (7.20s)

TestStartStop/group/no-preload/serial/DeployApp (7.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-603010 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0d4cbf8f-cfc7-4e80-badf-b1b840617547] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0d4cbf8f-cfc7-4e80-badf-b1b840617547] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.003732119s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-603010 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.24s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (18.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-993813 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-993813 --alsologtostderr -v=3: (18.126750659s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (18.13s)

TestStartStop/group/no-preload/serial/Stop (18.25s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-603010 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-603010 --alsologtostderr -v=3: (18.25384297s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.25s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-579951 -n old-k8s-version-579951
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-579951 -n old-k8s-version-579951: exit status 7 (91.165646ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-579951 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
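
"minikube status" intentionally exits non-zero for a stopped host (exit status 7 above), so scripted checks must tolerate the exit code; a sketch using the same flags as the test:

    # "|| true" keeps "set -e" scripts alive while the host is stopped
    host_state=$(minikube status --format='{{.Host}}' -p old-k8s-version-579951 || true)
    echo "host: ${host_state}"   # prints "Stopped" at this point in the run
    minikube addons enable dashboard -p old-k8s-version-579951 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4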

TestStartStop/group/old-k8s-version/serial/SecondStart (48.65s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-579951 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (48.326826401s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-579951 -n old-k8s-version-579951
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.65s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438041 -n newest-cni-438041
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438041 -n newest-cni-438041: exit status 7 (89.074345ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-438041 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (10.87s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 03:11:58.616172  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/functional-333040/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-438041 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.535598253s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-438041 -n newest-cni-438041
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.87s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)
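
The warning means that with --network-plugin=cni and no CNI plugin installed, ordinary pods stay Pending. One way to see whether any CNI configuration is present on the node (an illustration, not part of the test):

    minikube ssh -p newest-cni-438041 "ls /etc/cni/net.d"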

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-438041 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)
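
The audit behind this test can be approximated from the shell; the repoTags field name is an assumption about minikube's JSON output, and jq must be installed:

    # List every tag reported by the runtime; anything outside the expected
    # minikube image set (here kindest/kindnetd) is what the test flags
    minikube -p newest-cni-438041 image list --format=json | jq -r '.[].repoTags[]'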

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813: exit status 7 (82.22593ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-993813 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-993813 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.619410009s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-993813 -n default-k8s-diff-port-993813
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (49.95s)
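
After a second start the health check the test performs can be repeated by hand; the kubectl call is added for illustration:

    minikube status --format='{{.Host}}' -p default-k8s-diff-port-993813   # expect "Running"
    kubectl --context default-k8s-diff-port-993813 get nodes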

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603010 -n no-preload-603010
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603010 -n no-preload-603010: exit status 7 (88.155818ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-603010 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (51.63s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-603010 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.314440817s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-603010 -n no-preload-603010
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.63s)
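
With --preload=false minikube pulls each image individually instead of extracting the preloaded tarball. The resulting image set can be inspected inside the node; a sketch, assuming crictl is available in the crio-based node as usual:

    minikube ssh -p no-preload-603010 "sudo crictl images"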

TestStartStop/group/embed-certs/serial/FirstStart (45.25s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.245429564s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.25s)
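
With --embed-certs the generated kubeconfig entry carries base64 certificate data inline rather than file paths, which makes it portable across machines. A quick check (the jsonpath filter is an illustration):

    # Non-empty output means the client cert is embedded in the kubeconfig
    kubectl config view --raw \
      -o jsonpath='{.users[?(@.name=="embed-certs-284604")].user.client-certificate-data}' | head -c 40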

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8b2mk" [36c6705a-eceb-43a7-9fce-96446385e0e3] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003288857s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
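
The same readiness check done by hand against the kubernetes-dashboard namespace; selector and timeout mirror the test:

    kubectl --context old-k8s-version-579951 -n kubernetes-dashboard wait pod \
      -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m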

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-8b2mk" [36c6705a-eceb-43a7-9fce-96446385e0e3] Running
E1124 03:12:46.811149  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:12:46.817575  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:12:46.828930  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:12:46.850272  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:12:46.891581  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:12:46.973750  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:12:47.135266  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:12:47.457456  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002942498s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-579951 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-579951 image list --format=json
E1124 03:12:48.099196  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-284604 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [84f9c221-0f52-448e-88a0-6d2e90c436b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [84f9c221-0f52-448e-88a0-6d2e90c436b2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003166635s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-284604 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6tmlg" [ed7bf0d2-2650-4552-8f7b-26df99b9dda6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003310351s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sfsh5" [4271eb57-8093-4453-8aad-0faa0f0d1c1e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003260344s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-6tmlg" [ed7bf0d2-2650-4552-8f7b-26df99b9dda6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003558182s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-993813 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-sfsh5" [4271eb57-8093-4453-8aad-0faa0f0d1c1e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003694884s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-603010 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/Stop (16.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-284604 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-284604 --alsologtostderr -v=3: (16.196485993s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.20s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-993813 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-603010 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284604 -n embed-certs-284604
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284604 -n embed-certs-284604: exit status 7 (76.179862ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-284604 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (48.04s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1124 03:13:21.914201  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/kindnet-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:24.476300  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/kindnet-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:27.788284  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:29.599058  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/kindnet-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:39.840601  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/kindnet-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:50.847919  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:50.854281  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:50.865602  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:50.886931  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:50.928268  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:51.009609  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:51.171086  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:51.492740  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:52.134727  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:53.416313  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:13:55.978171  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:00.322470  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/kindnet-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:01.099632  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:07.444643  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/custom-flannel-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:07.451020  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/custom-flannel-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:07.462345  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/custom-flannel-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:07.483663  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/custom-flannel-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:07.524935  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/custom-flannel-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:07.606303  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/custom-flannel-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:07.768346  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/custom-flannel-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:08.089605  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/custom-flannel-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-284604 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.720800698s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-284604 -n embed-certs-284604
E1124 03:14:08.731217  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/custom-flannel-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:08.749529  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/auto-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.04s)
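
The cert_rotation errors interleaved above point at client certificates of profiles (auto-965704, kindnet-965704, calico-965704, custom-flannel-965704) that were presumably deleted earlier in the run; they come from a background certificate reloader and do not affect this test. The profiles still present can be listed with:

    minikube profile list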

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fbjrx" [f370dc97-efc4-4903-a62a-e6af42b5f4f9] Running
E1124 03:14:10.012545  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/custom-flannel-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:11.340995  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/calico-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:14:12.573869  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/custom-flannel-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002933888s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fbjrx" [f370dc97-efc4-4903-a62a-e6af42b5f4f9] Running
E1124 03:14:17.695178  349078 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/custom-flannel-965704/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003285526s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-284604 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-284604 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

Test skip (27/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.99s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-965704 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-965704

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-965704

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-965704

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-965704

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-965704

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-965704

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-965704

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-965704

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-965704

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-965704

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: /etc/hosts:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: /etc/resolv.conf:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-965704

>>> host: crictl pods:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: crictl containers:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> k8s: describe netcat deployment:
error: context "kubenet-965704" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-965704" does not exist

>>> k8s: netcat logs:
error: context "kubenet-965704" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-965704" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-965704" does not exist

>>> k8s: coredns logs:
error: context "kubenet-965704" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-965704" does not exist

>>> k8s: api server logs:
error: context "kubenet-965704" does not exist

>>> host: /etc/cni:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: ip a s:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: ip r s:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: iptables-save:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: iptables table nat:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-965704" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-965704" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-965704" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: kubelet daemon config:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> k8s: kubelet logs:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:04:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-062725
contexts:
- context:
    cluster: cert-expiration-062725
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:04:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-062725
  name: cert-expiration-062725
current-context: ""
kind: Config
users:
- name: cert-expiration-062725
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/cert-expiration-062725/client.crt
    client-key: /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/cert-expiration-062725/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-965704

>>> host: docker daemon status:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: docker daemon config:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: docker system info:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: cri-docker daemon status:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: cri-docker daemon config:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: cri-dockerd version:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: containerd daemon status:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: containerd daemon config:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: containerd config dump:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: crio daemon status:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: crio daemon config:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: /etc/crio:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"

>>> host: crio config:
* Profile "kubenet-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-965704"
----------------------- debugLogs end: kubenet-965704 [took: 3.801194619s] --------------------------------
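
The config dump above explains every failure in this block: the only kubeconfig entry left on the builder is the stale cert-expiration-062725 profile and current-context is empty, so any command scoped to the kubenet-965704 context can only fail. With standard kubectl (not harness-specific), that state can be confirmed with:

  # list known contexts; kubenet-965704 is absent
  kubectl config get-contexts
  # show the active context (empty string here)
  kubectl config current-context
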
helpers_test.go:175: Cleaning up "kubenet-965704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-965704
--- SKIP: TestNetworkPlugins/group/kubenet (3.99s)

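For context on the skip at net_test.go:93: kubenet relies on the kubelet's legacy network-plugin mode rather than a CNI configuration, while minikube's crio runtime only runs with a CNI. A hedged equivalent start line using current minikube flags (bridge being the usual explicit CNI choice) would be:

  # crio needs an explicit CNI; kubenet has no crio-compatible mode
  minikube start -p kubenet-965704 --container-runtime=crio --cni=bridge
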
TestNetworkPlugins/group/cilium (4.38s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-965704 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-965704

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-965704

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-965704

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-965704

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-965704

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-965704

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-965704

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-965704

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-965704

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-965704

>>> host: /etc/nsswitch.conf:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: /etc/hosts:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: /etc/resolv.conf:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-965704

>>> host: crictl pods:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: crictl containers:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> k8s: describe netcat deployment:
error: context "cilium-965704" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-965704" does not exist

>>> k8s: netcat logs:
error: context "cilium-965704" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-965704" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-965704" does not exist

>>> k8s: coredns logs:
error: context "cilium-965704" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-965704" does not exist

>>> k8s: api server logs:
error: context "cilium-965704" does not exist

>>> host: /etc/cni:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: ip a s:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: ip r s:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: iptables-save:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: iptables table nat:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-965704

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-965704

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-965704" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-965704" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-965704

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-965704

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-965704" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-965704" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-965704" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-965704" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-965704" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: kubelet daemon config:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> k8s: kubelet logs:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-345525/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:04:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: cert-expiration-062725
contexts:
- context:
    cluster: cert-expiration-062725
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:04:47 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-062725
  name: cert-expiration-062725
current-context: ""
kind: Config
users:
- name: cert-expiration-062725
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/cert-expiration-062725/client.crt
    client-key: /home/jenkins/minikube-integration/21975-345525/.minikube/profiles/cert-expiration-062725/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-965704

>>> host: docker daemon status:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: docker daemon config:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: docker system info:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: cri-docker daemon status:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: cri-docker daemon config:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: cri-dockerd version:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: containerd daemon status:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: containerd daemon config:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: containerd config dump:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: crio daemon status:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: crio daemon config:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: /etc/crio:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"

>>> host: crio config:
* Profile "cilium-965704" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-965704"
----------------------- debugLogs end: cilium-965704 [took: 4.169095921s] --------------------------------
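
The cilium-specific probes above target the cilium daemon set and deployment; against a live cluster they would reduce to ordinary kubectl calls, roughly as follows (resource names assume a standard cilium install in kube-system):

  # what the daemon-set probes would query on a running cluster
  kubectl --context cilium-965704 -n kube-system describe ds cilium
  kubectl --context cilium-965704 -n kube-system logs -l k8s-app=cilium --tail=50
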
helpers_test.go:175: Cleaning up "cilium-965704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-965704
--- SKIP: TestNetworkPlugins/group/cilium (4.38s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-242597" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-242597
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)
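
For reference, this group presumably exercises minikube's --disable-driver-mounts start flag, which suppresses the hypervisor's built-in host-folder mounts and therefore only applies to VM drivers such as virtualbox (hence the skip on this docker-driver job). An illustrative invocation on a virtualbox host:

  # VM drivers only; a no-op under the docker driver used here
  minikube start -p disable-driver-mounts-242597 --driver=virtualbox --disable-driver-mounts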